Dataset Preview
paper_id (string)  summaries (json)  abstractText (string)  authors (json)  references (json)  sections (json)  year (int)  title (string) 

SP:4d08cdb2de2044bcb574a425b42963b83fbebfbc
 [
"This paper investigates kernel ridgeless regression from a stability viewpoint by deriving its risk bounds. Using stability arguments to derive risk bounds has been widely adopted in machine learning. However, related studies on kernel ridgeless regression are still sparse. The present study fills this gap, which, in my opinion, is also one of the main contributions of the present study. "
]
 We study the average CVloo stability of kernel ridgeless regression and derive corresponding risk bounds. We show that the interpolating solution with minimum norm minimizes a bound on CVloo stability, which in turn is controlled by the condition number of the empirical kernel matrix. The latter can be characterized in the asymptotic regime where both the dimension and cardinality of the data go to infinity. Under the assumption of random kernel matrices, the corresponding test error is expected to follow a double descent curve.
 []
 [
{
"authors": [
"Jerzy K Baksalary",
"Oskar Maria Baksalary",
"Götz Trenkler"
],
"title": "A revisitation of formulae for the Moore–Penrose inverse of modified matrices",
"venue": "Linear Algebra and Its Applications,",
"year": 2003
},
{
"authors": [
"Peter L. Bartlett",
"Philip M. Long",
"Gábor Lugosi",
"Alexander Tsigler"
],
"title": "Benign overfitting in linear regression",
"venue": "CoRR, abs/1906.11300,",
"year": 2019
},
{
"authors": [
"Mikhail Belkin",
"Daniel Hsu",
"Siyuan Ma",
"Soumik Mandal"
],
"title": "Reconciling modern machine-learning practice and the classical bias–variance trade-off",
"venue": "Proceedings of the National Academy of Sciences,",
"year": 2019
},
{
"authors": [
"Stéphane Boucheron",
"Olivier Bousquet",
"Gábor Lugosi"
],
"title": "Theory of classification: A survey of some recent advances",
"venue": "ESAIM: probability and statistics,",
"year": 2005
},
{
"authors": [
"O. Bousquet",
"A. Elisseeff"
],
"title": "Stability and generalization",
"venue": "Journal of Machine Learning Research,",
"year": 2001
},
{
"authors": [
"Peter Bühlmann",
"Sara Van De Geer"
],
"title": "Statistics for high-dimensional data: methods, theory and applications",
"venue": "Springer Science & Business Media,",
"year": 2011
},
{
"authors": [
"Noureddine El Karoui"
],
"title": "The spectrum of kernel random matrices",
"venue": "arXiv e-prints,",
"year": 2010
},
{
"authors": [
"Trevor Hastie",
"Andrea Montanari",
"Saharon Rosset",
"Ryan J. Tibshirani"
],
"title": "Surprises in High-Dimensional Ridgeless Least Squares Interpolation",
"venue": "arXiv e-prints,",
"year": 2019
},
{
"authors": [
"S. Kutin",
"P. Niyogi"
],
"title": "Almost-everywhere algorithmic stability and generalization error",
"venue": "Technical report TR-2002-03,",
"year": 2002
},
{
"authors": [
"Tengyuan Liang",
"Alexander Rakhlin",
"Xiyu Zhai"
],
"title": "On the Risk of Minimum-Norm Interpolants and Restricted Lower Isometry of Kernels",
"venue": "arXiv e-prints,",
"year": 2019
},
{
"authors": [
"Tengyuan Liang",
"Alexander Rakhlin"
],
"title": "Just interpolate: Kernel “ridgeless” regression can generalize",
"venue": "Annals of Statistics,",
"year": 2020
},
{
"authors": [
"V.A. Marchenko",
"L.A. Pastur"
],
"title": "Distribution of eigenvalues for some sets of random matrices",
"venue": "Mat. Sb. (N.S.),",
"year": 1967
},
{
"authors": [
"Song Mei",
"Andrea Montanari"
],
"title": "The generalization error of random features regression: Precise asymptotics and double descent curve",
"venue": "arXiv e-prints,",
"year": 2019
},
{
"authors": [
"Carl Meyer"
],
"title": "Generalized inversion of modified matrices",
"venue": "SIAM J. Applied Math,",
"year": 1973
},
{
"authors": [
"C.A. Micchelli"
],
"title": "Interpolation of scattered data: distance matrices and conditionally positive definite functions",
"venue": "Constructive Approximation,",
"year": 1986
},
{
"authors": [
"Sayan Mukherjee",
"Partha Niyogi",
"Tomaso Poggio",
"Ryan Rifkin"
],
"title": "Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization",
"venue": "Advances in Computational Mathematics,",
"year": 2006
},
{
"authors": [
"T. Poggio",
"R. Rifkin",
"S. Mukherjee",
"P. Niyogi"
],
"title": "General conditions for predictivity in learning theory",
"venue": "Nature,",
"year": 2004
},
{
"authors": [
"T. Poggio",
"G. Kur",
"A. Banburski"
],
"title": "Double descent in the condition number",
"venue": "Technical report, MIT Center for Brains Minds and Machines,",
"year": 2019
},
{
"authors": [
"Tomaso Poggio"
],
"title": "Stable foundations for learning",
"venue": "Center for Brains, Minds and Machines (CBMM) Memo,",
"year": 2020
},
{
"authors": [
"Alexander Rakhlin",
"Xiyu Zhai"
],
"title": "Consistency of Interpolation with Laplace Kernels is a High-Dimensional Phenomenon",
"venue": "arXiv e-prints,",
"year": 2018
},
{
"authors": [
"Lorenzo Rosasco",
"Silvia Villa"
],
"title": "Learning with incremental iterative regularization",
"venue": "Advances in Neural Information Processing Systems",
"year": 2015
},
{
"authors": [
"Shai Shalev-Shwartz",
"Shai Ben-David"
],
"title": "Understanding Machine Learning: From Theory to Algorithms",
"venue": null,
"year": 2014
},
{
"authors": [
"Shai Shalev-Shwartz",
"Ohad Shamir",
"Nathan Srebro",
"Karthik Sridharan"
],
"title": "Learnability, stability and uniform convergence",
"venue": "J. Mach. Learn. Res.,",
"year": 2010
},
{
"authors": [
"Ingo Steinwart",
"Andreas Christmann"
],
"title": "Support vector machines",
"venue": "Springer Science & Business Media,",
"year": 2008
},
{
"authors": [
"Chiyuan Zhang",
"Samy Bengio",
"Moritz Hardt",
"Benjamin Recht",
"Oriol Vinyals"
],
"title": "Understanding deep learning requires rethinking generalization",
"venue": "In 5th International Conference on Learning Representations,",
"year": 2017
}
]
 [
{
"heading": "1 INTRODUCTION",
"text": "Statistical learning theory studies the learning properties of machine learning algorithms, and more fundamentally, the conditions under which learning from finite data is possible. In this context, classical learning theory focuses on the size of the hypothesis space in terms of different complexity measures, such as combinatorial dimensions, covering numbers and Rademacher/Gaussian complexities (Shalev-Shwartz & Ben-David, 2014; Boucheron et al., 2005). Another more recent approach is based on defining suitable notions of stability with respect to perturbation of the data (Bousquet & Elisseeff, 2001; Kutin & Niyogi, 2002). In this view, the continuity of the process that maps data to estimators is crucial, rather than the complexity of the hypothesis space. Different notions of stability can be considered, depending on the data perturbation and metric considered (Kutin & Niyogi, 2002). Interestingly, the stability and complexity approaches to characterizing the learnability of problems are not at odds with each other, and can be shown to be equivalent, as in Poggio et al. (2004) and Shalev-Shwartz et al. (2010).\nIn modern machine learning, overparameterized models, with a larger number of parameters than the size of the training data, have become common. The ability of these models to generalize is well explained by classical statistical learning theory as long as some form of regularization is used in the training process (Bühlmann & Van De Geer, 2011; Steinwart & Christmann, 2008). However, it was recently shown, first for deep networks (Zhang et al., 2017) and more recently for kernel methods (Belkin et al., 2019), that learning is possible in the absence of regularization, i.e., when perfectly fitting/interpolating the data. Much recent work in statistical learning theory has tried to find theoretical ground for this empirical finding. 
Since learning using models that interpolate is not exclusive to deep neural networks, we study generalization in the presence of interpolation in the case of kernel methods. We study both linear and kernel least squares problems in this paper.\nOur Contributions:\n• We characterize the generalization properties of interpolating solutions for linear and kernel least squares problems using a stability approach. While the (uniform) stability properties of regularized kernel methods are well known (Bousquet & Elisseeff, 2001), we study interpolating solutions of the unregularized (\"ridgeless\") regression problems.\n• We obtain an upper bound on the stability of interpolating solutions, and show that this upper bound is minimized by the minimum norm interpolating solution. This also means that among all interpolating solutions, the minimum norm solution has the best test error. In\nparticular, the same conclusion is also true for gradient descent, since it converges to the minimum norm solution in the setting we consider, see e.g. Rosasco & Villa (2015). • Our stability bounds show that the average stability of the minimum norm solution is\ncontrolled by the condition number of the empirical kernel matrix. It is well known that the numerical stability of the least squares solution is governed by the condition number of the associated kernel matrix (see the discussion of why overparametrization is “good” in Poggio et al. (2019)). Our results show that the condition number also controls stability (and hence, test error) in a statistical sense.\nOrganization: In section 2, we introduce basic ideas in statistical learning and empirical risk minimization, as well as the notation used in the rest of the paper. In section 3, we briefly recall some definitions of stability. In section 4, we study the stability of interpolating solutions to kernel least squares and show that the minimum norm solutions minimize an upper bound on the stability. 
In section 5 we discuss our results in the context of recent work on high dimensional regression. We conclude in section 6."
},
{
"heading": "2 STATISTICAL LEARNING AND EMPIRICAL RISK MINIMIZATION",
"text": "We begin by recalling the basic ideas in statistical learning theory. In this setting, X is the space of features, Y is the space of targets or labels, and there is an unknown probability distribution µ on the product space Z = X × Y. In the following, we consider X = Rd and Y = R. The distribution µ is fixed but unknown, and we are given a training set S consisting of n samples (thus |S| = n) drawn i.i.d. from the probability distribution on Zn, S = (zi)_{i=1}^n = (xi, yi)_{i=1}^n. Intuitively, the goal of supervised learning is to use the training set S to “learn” a function fS that, evaluated at a new value xnew, should predict the associated value ynew, i.e. ynew ≈ fS(xnew). The loss is a function V : F × Z → [0,∞), where F is the space of measurable functions from X to Y, that measures how well a function performs on a data point. We define a hypothesis space H ⊆ F where algorithms search for solutions. With the above notation, the expected risk of f is defined as I[f] = Ez V(f, z), which is the expected loss on a new sample drawn according to the data distribution µ. In this setting, statistical learning can be seen as the problem of finding an approximate minimizer of the expected risk given a training set S. A classical approach to derive an approximate solution is empirical risk minimization (ERM), where we minimize the empirical risk IS[f] = (1/n) ∑_{i=1}^n V(f, zi).\nA natural error measure for our ERM solution fS is the expected excess risk ES[I[fS] − min_{f∈H} I[f]]. Another common error measure is the expected generalization error/gap given by ES[I[fS] − IS[fS]]. These two error measures are closely related since the expected excess risk is easily bounded by the expected generalization error (see Lemma 5)."
},
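The ERM recipe above (minimize the empirical risk IS[f] = (1/n) ∑i V(f, zi), then ask how close I[fS] is to it) can be sketched numerically. This is a minimal illustration, not from the paper: the one-parameter linear model, noise level, and sample sizes are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ERM for the square loss V(f, z) = (y - f(x))^2 with f(x) = w * x.
n = 50
x = rng.normal(size=n)
y = 2.0 * x + 0.1 * rng.normal(size=n)           # data distribution mu (true w = 2)

# ERM: closed-form minimizer of the empirical risk I_S[f].
w_hat = np.sum(x * y) / np.sum(x * x)
emp_risk = np.mean((y - w_hat * x) ** 2)          # I_S[f_S]

# Monte Carlo estimate of the expected risk I[f_S] on fresh samples from mu.
x_new = rng.normal(size=100_000)
y_new = 2.0 * x_new + 0.1 * rng.normal(size=100_000)
exp_risk = np.mean((y_new - w_hat * x_new) ** 2)  # estimate of I[f_S]

print(w_hat, emp_risk, exp_risk)
```

The difference exp_risk − emp_risk is then an empirical estimate of the generalization gap ES[I[fS] − IS[fS]] discussed above.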
{
"heading": "2.1 KERNEL LEAST SQUARES AND MINIMUM NORM SOLUTION",
"text": "The focus in this paper is on the kernel least squares problem. We assume the loss function V is the square loss, that is, V(f, z) = (y − f(x))². The hypothesis space is assumed to be a reproducing kernel Hilbert space, defined by a positive definite kernel K : X × X → R or an associated feature map Φ : X → H, such that K(x, x′) = ⟨Φ(x), Φ(x′)⟩H for all x, x′ ∈ X, where ⟨·,·⟩H is the inner product in H. In this setting, functions are linearly parameterized, that is, there exists w ∈ H such that f(x) = ⟨w, Φ(x)⟩H for all x ∈ X. The ERM problem typically has multiple solutions, one of which is the minimum norm solution:\nf†S = arg min_{f∈M} ‖f‖H,  M = arg min_{f∈H} (1/n) ∑_{i=1}^n (f(xi) − yi)². (1)\nHere ‖·‖H is the norm on H induced by the inner product. The minimum norm solution can be shown to be unique and to satisfy a representer theorem, that is, for all x ∈ X:\nf†S(x) = ∑_{i=1}^n K(x, xi) cS[i],  cS = K†y (2)\nwhere cS = (cS[1], . . . , cS[n]), y = (y1, . . . , yn) ∈ Rn, K is the n by n matrix with entries Kij = K(xi, xj), i, j = 1, . . . , n, and K† is the Moore-Penrose pseudoinverse of K. If we assume n ≤ d and that we have n linearly independent data features, that is, the rank of X is n, then it is possible to show that for many kernels one can replace K† by K⁻¹ (see Remark 2). Note that invertibility is necessary and sufficient for interpolation. That is, if K is invertible, f†S(xi) = yi for all i = 1, . . . , n, in which case the training error in (1) is zero.\nRemark 1 (Pseudoinverse for underdetermined linear systems) A simple yet relevant example is linear functions f(x) = w⊤x, which correspond to H = Rd and Φ the identity map. If the rank of X ∈ Rd×n is n, then any interpolating solution wS satisfies w⊤S xi = yi for all i = 1, . . . , n, and the minimum norm solution, also called the Moore-Penrose solution, is given by (w†S)⊤ = y⊤X†, where the pseudoinverse X† takes the form X† = X⊤(XX⊤)⁻¹.\nRemark 2 (Invertibility of translation invariant kernels) Translation invariant kernels are a family of kernel functions given by K(x1, x2) = k(x1 − x2) where k is an even function on Rd. Translation invariant kernels are Mercer kernels (positive semidefinite) if the Fourier transform of k(·) is nonnegative. For Radial Basis Function kernels (K(x1, x2) = k(‖x1 − x2‖)) we have the additional property, due to Theorem 2.3 of Micchelli (1986), that for distinct points x1, x2, . . . , xn ∈ Rd the kernel matrix K is nonsingular and thus invertible.\nThe above discussion is directly related to regularization approaches.\nRemark 3 (Stability and Tikhonov regularization) Tikhonov regularization is used to prevent potentially unstable behaviors. In the above setting, it corresponds to replacing Problem (1) by min_{f∈H} (1/n) ∑_{i=1}^n (f(xi) − yi)² + λ‖f‖²H, where the corresponding unique solution is given by fλS(x) = ∑_{i=1}^n K(x, xi) c[i], c = (K + λIn)⁻¹y. In contrast to ERM solutions, the above approach prevents interpolation. The properties of the corresponding estimator are well known. In this paper, we complement these results focusing on the case λ → 0.\nFinally, we end by recalling the connection between the minimum norm solution and gradient descent.\nRemark 4 (Minimum norm and gradient descent) In our setting, it is well known that both batch and stochastic gradient iterations converge exactly to the minimum norm solution when multiple solutions exist, see e.g. Rosasco & Villa (2015). Thus, a study of the properties of the minimum norm solution explains the properties of the solution to which gradient descent converges. In particular, when ERM has multiple interpolating solutions, gradient descent converges to a solution that minimizes a bound on stability, as we show in this paper."
},
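The representer-theorem solution cS = K†y of Eq. (2) can be sketched in a few lines, assuming an RBF kernel so that K is invertible for distinct points (Remark 2); the data sizes and bandwidth are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimum norm kernel least squares solution: c_S = K^+ y (Eq. 2),
# with an RBF kernel, which is nonsingular for distinct points.
def rbf_kernel(A, B, sigma=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

n, d = 20, 5
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

K = rbf_kernel(X, X)
c = np.linalg.pinv(K) @ y           # c_S = K^+ y (here K is in fact invertible)

f_train = K @ c                     # f(x_i) = sum_j K(x_i, x_j) c[j]
print(np.max(np.abs(f_train - y)))  # ~0: the minimum norm solution interpolates
```

Since K is invertible here, `np.linalg.solve(K, y)` would give the same coefficients; the pseudoinverse form matches Eq. (2) and also covers the singular case.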
{
"heading": "3 ERROR BOUNDS VIA STABILITY",
"text": "In this section, we recall basic results relating the learning and stability properties of Empirical Risk Minimization (ERM). Throughout the paper, we assume that ERM achieves a minimum, albeit the extension to almost minimizers is possible (Mukherjee et al., 2006) and important for exponential-type loss functions (Poggio, 2020). We do not assume the expected risk to achieve a minimum. Since we will be considering leave-one-out stability in this section, we look at solutions to ERM over the complete training set S = {z1, z2, . . . , zn} and the leave-one-out training set Si = {z1, z2, . . . , zi−1, zi+1, . . . , zn}.\nThe excess risk of ERM can be easily related to its stability properties. Here, we follow the definition laid out in Mukherjee et al. (2006) and say that an algorithm is Cross-Validation leave-one-out (CVloo) stable in expectation, if there exists βCV > 0 such that for all i = 1, . . . , n,\nES[V(fSi, zi) − V(fS, zi)] ≤ βCV. (3)\nThis definition is justified by the following result that bounds the excess risk of a learning algorithm by its average CVloo stability (Shalev-Shwartz et al., 2010; Mukherjee et al., 2006).\nLemma 5 (Excess Risk & CVloo Stability) For all i = 1, . . . , n,\nES[I[fSi] − inf_{f∈H} I[f]] ≤ ES[V(fSi, zi) − V(fS, zi)]. (4)\nRemark 6 (Connection to uniform stability and other notions of stability) Uniform stability, introduced by Bousquet & Elisseeff (2001), corresponds in our notation to the assumption that there exists βu > 0 such that for all i = 1, . . . , n, sup_{z∈Z} |V(fSi, z) − V(fS, z)| ≤ βu. Clearly this is a strong notion implying most other definitions of stability. We note that there are a number of different notions of stability. We refer the interested reader to Kutin & Niyogi (2002) and Mukherjee et al. (2006).\nWe recall the proof of Lemma 5 in Appendix A.2 due to lack of space. In Appendix A, we also discuss other definitions of stability and their connections to concepts in statistical learning theory like generalization and learnability.\n4 CVloo STABILITY OF KERNEL LEAST SQUARES\nIn this section we analyze the expected CVloo stability of interpolating solutions to the kernel least squares problem, and obtain an upper bound on their stability. We show that this upper bound on the expected CVloo stability is smallest for the minimum norm interpolating solution (1) when compared to other interpolating solutions to the kernel least squares problem.\nWe have a dataset S = {(xi, yi)}_{i=1}^n and we want to find a mapping f ∈ H that minimizes the empirical least squares risk. Here H is a reproducing kernel Hilbert space (RKHS) defined by a positive definite kernel K : X × X → R. All interpolating solutions are of the form f̂S(·) = ∑_{j=1}^n ĉS[j] K(xj, ·), where ĉS = K†y + (I − K†K)v. Similarly, all interpolating solutions on the leave-one-out dataset Si can be written as f̂Si(·) = ∑_{j=1, j≠i}^n ĉSi[j] K(xj, ·), where ĉSi = K†Si yi + (I − K†Si KSi)vi. Here K, KSi are the empirical kernel matrices on the original and leave-one-out datasets respectively. We note that when v = 0 and vi = 0, we obtain the minimum norm interpolating solutions on the datasets S and Si.\nTheorem 7 (Main Theorem) Consider the kernel least squares problem with a bounded kernel and bounded outputs y, that is, there exist κ, M > 0 such that\nK(x, x′) ≤ κ², |y| ≤ M, (5)\nalmost surely. Then for any interpolating solutions f̂Si, f̂S,\nES[V(f̂Si, zi) − V(f̂S, zi)] ≤ βCV(K†, y, v, vi). (6)\nThis bound βCV is minimized when v = vi = 0, which corresponds to the minimum norm interpolating solutions f†S, f†Si. For the minimum norm solutions we have\nβCV = C1β1 + C2β2, where β1 = ES[‖K^{1/2}‖op ‖K†‖op × cond(K) × ‖y‖] and β2 = ES[‖K^{1/2}‖²op ‖K†‖²op × (cond(K))² × ‖y‖²], and C1, C2 are absolute constants that do not depend on either d or n.\nIn the above theorem ‖K‖op refers to the operator norm of the kernel matrix K, ‖y‖ refers to the standard ℓ2 norm for y ∈ Rn, and cond(K) is the condition number of the matrix K. We can combine the above result with Lemma 5 to obtain the following bound on excess risk for minimum norm interpolating solutions to the kernel least squares problem:\nCorollary 8 The excess risk of the minimum norm interpolating kernel least squares solution can be bounded as: ES[I[f†Si] − inf_{f∈H} I[f]] ≤ C1β1 + C2β2,\nwhere β1, β2 are as defined previously.\nRemark 9 (Underdetermined Linear Regression) In the case of underdetermined linear regression, i.e., linear regression where the dimensionality is larger than the number of samples in the training set, we can prove a version of Theorem 7 with β1 = ES[‖X†‖op ‖y‖] and β2 = ES[‖X†‖²op ‖y‖²]. Due to space constraints, we present the proof of the results in the linear regression case in Appendix B."
},
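The quantity bounded in Theorem 7 can also be estimated by brute force: fit the minimum norm interpolant on S and on each leave-one-out set Si, then average V(f_Si, zi) − V(f_S, zi) over i. A small sketch under arbitrary illustrative choices (RBF kernel, Gaussian data and labels, small n and d):

```python
import numpy as np

rng = np.random.default_rng(0)

# Empirical estimate of average CVloo stability (Eq. 3) for the minimum norm
# kernel interpolant: average of V(f_{S^i}, z_i) - V(f_S, z_i) over i.
def rbf(A, B, sigma=2.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

n, d = 30, 10
X = rng.normal(size=(n, d))
y = rng.normal(size=n)
K = rbf(X, X)
c = np.linalg.pinv(K) @ y                 # minimum norm interpolant on S

gaps = []
for i in range(n):
    keep = np.r_[0:i, i + 1:n]
    Ki = K[np.ix_(keep, keep)]
    ci = np.linalg.pinv(Ki) @ y[keep]     # minimum norm interpolant on S^i
    f_loo = K[i, keep] @ ci               # f_{S^i}(x_i)
    f_full = K[i] @ c                     # f_S(x_i), equal to y_i by interpolation
    gaps.append((y[i] - f_loo) ** 2 - (y[i] - f_full) ** 2)

print(np.mean(gaps))                      # an empirical estimate of beta_CV
```

Because f_S interpolates, the second loss term is essentially zero and the average gap is nonnegative, consistent with the almost positivity of ERM mentioned in the text.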
{
"heading": "4.1 KEY LEMMAS",
"text": "In order to prove Theorem 7 we make use of the following lemmas to bound the CVloo stability using the norms and the difference of the solutions.\nLemma 10 Under assumption (5), for all i = 1, . . . , n, it holds that\nES[V(f̂Si, zi) − V(f̂S, zi)] ≤ ES[(2M + κ(‖f̂S‖H + ‖f̂Si‖H)) × κ‖f̂S − f̂Si‖H].\nProof We begin by recalling that the square loss is locally Lipschitz, that is, for all y, a, a′ ∈ R,\n|(y − a)² − (y − a′)²| ≤ (2|y| + |a| + |a′|) |a − a′|.\nIf we apply this result to f, f′ in a RKHS H,\n|(y − f(x))² − (y − f′(x))²| ≤ κ(2M + κ(‖f‖H + ‖f′‖H)) ‖f − f′‖H,\nusing the basic property of a RKHS that for all f ∈ H\n|f(x)| ≤ ‖f‖∞ = sup_x |f(x)| = sup_x |⟨f, Kx⟩H| ≤ κ‖f‖H. (7)\nIn particular, we can plug f̂Si and f̂S into the above inequality, and the almost positivity of ERM (Mukherjee et al., 2006) allows us to drop the absolute value on the left hand side. Finally the desired result follows by taking the expectation over S.\nNow that we have bounded the CVloo stability using the norms and the difference of the solutions, we can find a bound on the difference between the solutions to the kernel least squares problem. This is our main stability estimate.\nLemma 11 Let f̂S, f̂Si be any interpolating kernel least squares solutions on the full and leave-one-out datasets (as defined at the top of this section). Then ‖f̂S − f̂Si‖H ≤ BCV(K†, y, v, vi), and BCV is minimized when v = vi = 0, which corresponds to the minimum norm interpolating solutions f†S, f†Si. Also, for some absolute constant C,\n‖f†S − f†Si‖H ≤ C × ‖K^{1/2}‖op ‖K†‖op × cond(K) × ‖y‖. (8)\nSince the minimum norm interpolating solutions minimize both ‖f̂S‖H + ‖f̂Si‖H and ‖f̂S − f̂Si‖H (from Lemmas 10, 11), we can put them together to prove Theorem 7. In the following section we provide the proof of Lemma 11.\nRemark 12 (Zero training loss) In Lemma 10 we use the locally Lipschitz property of the squared loss function to bound the leave-one-out stability in terms of the difference between the norms of the solutions. Under interpolating conditions, if we set the term V(f̂S, zi) = 0, the leave-one-out stability reduces to ES[V(f̂Si, zi) − V(f̂S, zi)] = ES[V(f̂Si, zi)] = ES[(f̂Si(xi) − yi)²] = ES[(f̂Si(xi) − f̂S(xi))²] = ES[⟨f̂Si(·) − f̂S(·), Kxi(·)⟩²] ≤ ES[‖f̂S − f̂Si‖²H × κ²]. We can plug in the bound from Lemma 11 to obtain similar qualitative and quantitative (up to constant factors) results as in Theorem 7.\nSimulation: In order to illustrate that the minimum norm interpolating solution is the best performing interpolating solution, we ran a simple experiment on a linear regression problem. We synthetically generated data from a linear model y = w⊤X, where X ∈ Rd×n was i.i.d. N(0, 1). The dimension of the data was d = 1000 and there were n = 200 samples in the training dataset. A held out dataset of 50 samples was used to compute the test mean squared error (MSE). Interpolating solutions were computed as ŵ⊤ = y⊤X† + v⊤(I − XX†) and the norm of v was varied to obtain the plot. The results are shown in Figure 1, where we can see that the training loss is 0 for all interpolants, but the test MSE increases as ‖v‖ increases, with (w†)⊤ = y⊤X† having the best performance. The figure reports results averaged over 100 trials."
},
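The simulation described above can be sketched at a smaller scale (d and n reduced from 1000 and 200, a single trial instead of 100; all sizes here are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Interpolants w_hat^T = y^T X^+ + v^T (I - X X^+): all fit the training data
# exactly, and the minimum norm one (v = 0) typically has the lowest test MSE.
d, n, n_test = 100, 20, 50
w_true = rng.normal(size=d)
X = rng.normal(size=(d, n))          # data matrix, columns are samples
y = w_true @ X
X_test = rng.normal(size=(d, n_test))
y_test = w_true @ X_test

Xp = np.linalg.pinv(X)               # X^+ = X^T (X X^T)^{-1} when rank(X) = n
P = np.eye(d) - X @ Xp               # projector onto the null space of X^T

test_mse = []
for scale in [0.0, 1.0, 5.0]:        # scale 0.0 is the minimum norm solution
    v = scale * rng.normal(size=d)
    w = y @ Xp + v @ P               # an interpolating solution
    assert np.allclose(w @ X, y)     # zero training loss for every interpolant
    test_mse.append(np.mean((w @ X_test - y_test) ** 2))

print(test_mse)
```

As in the text, the training loss is zero for every choice of v, while the test MSE grows with the size of the null-space component v⊤(I − XX†).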
{
"heading": "4.2 PROOF OF LEMMA 11",
"text": "We can write any interpolating solution to the kernel regression problem as f̂S(x) = ∑_{i=1}^n ĉS[i] K(xi, x), where ĉS = K†y + (I − K†K)v, K ∈ Rn×n is the kernel matrix on S, i.e. Kij = K(xi, xj), v is any vector in Rn, and y ∈ Rn is the vector y = [y1, . . . , yn]⊤. Similarly, the coefficient vector for the corresponding interpolating solution to the problem over the leave-one-out dataset Si is ĉSi = (KSi)†yi + (I − (KSi)†KSi)vi, where yi = [y1, . . . , 0, . . . , yn]⊤ and KSi is the kernel matrix K with the ith row and column set to zero, which is the kernel matrix for the leave-one-out training set.\nWe define a = [−K(x1, xi), . . . , −K(xn, xi)]⊤ ∈ Rn and b ∈ Rn as a one-hot column vector with all zeros apart from the ith component, which is 1. Let a∗ = a + K(xi, xi)b. Then, we have:\nK∗ = K + b a∗⊤\nKSi = K∗ + a b⊤ (9)\nThat is, we can write KSi as a rank-2 update to K. This can be verified by simple algebra, using the fact that K is a symmetric kernel. Now we are interested in bounding ‖f̂S − f̂Si‖H. For a function h(·) = ∑_{i=1}^m pi K(xi, ·) ∈ H we have ‖h‖H = √(p⊤Kp) = ‖K^{1/2}p‖. So we have:\n‖f̂S − f̂Si‖H = ‖K^{1/2}(ĉS − ĉSi)‖\n= ‖K^{1/2}(K†y + (I − K†K)v − (KSi)†yi − (I − (KSi)†KSi)vi)‖\n= ‖K^{1/2}(K†y − (KSi)†y + yi (KSi)†b + (I − K†K)(v − vi) + (K†K − (KSi)†KSi)vi)‖\n= ‖K^{1/2}[(K† − (KSi)†)y + (I − K†K)(v − vi) − (K†K − (KSi)†KSi)vi]‖ (10)\nHere we make use of the fact that (KSi)†b = 0. If K has full rank (as in Remark 2), we see that b lies in the column space of K and a∗ lies in the column space of K⊤. Furthermore, β∗ = 1 + a∗⊤K†b = 1 + a⊤K†b + K(xi, xi) b⊤K†b = Kii(K†)ii ≠ 0. Using equation 2.2 of Baksalary et al. (2003) we obtain:\nK†∗ = K† − (Kii(K†)ii)⁻¹ K†b a∗⊤K†\n= K† − (Kii(K†)ii)⁻¹ K†b a⊤K† − ((K†)ii)⁻¹ K†b b⊤K†\n= K† + (Kii(K†)ii)⁻¹ K†b b⊤ − ((K†)ii)⁻¹ K†b b⊤K† (11)\nHere we make use of the fact that a⊤K† = −b⊤. Also, using the corresponding formula from List 2 of Baksalary et al. (2003), we have K†∗K∗ = K†K.\nNext, we see that since K∗ has the same rank as K, a lies in the column space of K∗, and b lies in the column space of K∗⊤. Furthermore, β = 1 + b⊤K†∗a = 0. This means we can use Theorem 6 in Meyer (1973) (equivalent to formula 2.1 in Baksalary et al. (2003)) to obtain the expression for (KSi)†, with k = K†∗a and h = b⊤K†∗:\n(KSi)† = K†∗ − kk†K†∗ − K†∗h†h + (k†K†∗h†)kh\n⟹ (KSi)† − K†∗ = (k†K†∗h†)kh − kk†K†∗ − K†∗h†h\n⟹ ‖(KSi)† − K†∗‖op ≤ 3‖K†∗‖op (12)\nAbove, we use the fact that the operator norm of a rank 1 matrix is given by ‖uv⊤‖op = ‖u‖ × ‖v‖. Also, using the corresponding formula from List 2 of Baksalary et al. (2003), we have:\n(KSi)†KSi = K†∗K∗ − kk†\n⟹ K†K − (KSi)†KSi = kk† (13)\nPutting the two parts together, we obtain the bound on ‖(KSi)† − K†‖op:\n‖K† − (KSi)†‖op = ‖K† − K†∗ + K†∗ − (KSi)†‖op ≤ 3‖K†∗‖op + ‖K† − K†∗‖op ≤ 3‖K†‖op + 4(Kii(K†)ii)⁻¹‖K†‖op + 4((K†)ii)⁻¹‖K†‖²op ≤ ‖K†‖op(3 + 8‖K†‖op‖K‖op) (14)\nThe last step follows from (Kii)⁻¹ ≤ ‖K†‖op and ((K†)ii)⁻¹ ≤ ‖K‖op. Plugging these calculations into equation (10) we get:\n‖f̂S − f̂Si‖H = ‖K^{1/2}[(K† − (KSi)†)y + (I − K†K)(v − vi) − (K†K − (KSi)†KSi)vi]‖\n≤ ‖K^{1/2}‖op (‖(K† − (KSi)†)y‖ + ‖(I − K†K)(v − vi)‖ + ‖kk†vi‖)\n≤ ‖K^{1/2}‖op (B0 + ‖I − K†K‖op ‖v − vi‖ + ‖vi‖) (15)\nWe see that the right hand side is minimized when v = vi = 0. We have also computed B0 = C × ‖K†‖op × cond(K) × ‖y‖, which concludes the proof of Lemma 11."
},
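The rank-2 update in Eq. (9) is easy to check numerically; here is a small sketch with an arbitrary RBF kernel matrix (the point count, dimension, and index i are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Numerical check of the rank-2 update (Eq. 9): zeroing the i-th row and
# column of K equals K + b a_*^T + a b^T with a, b, a_* as defined in the text.
n = 8
Z = rng.normal(size=(n, 3))
K = np.exp(-np.sum((Z[:, None] - Z[None, :]) ** 2, -1) / 2)  # RBF kernel matrix

i = 2
a = -K[:, i].copy()                       # a = [-K(x_1,x_i), ..., -K(x_n,x_i)]^T
b = np.zeros(n); b[i] = 1.0               # one-hot at component i
a_star = a + K[i, i] * b

K_star = K + np.outer(b, a_star)          # K_* = K + b a_*^T
K_Si = K_star + np.outer(a, b)            # K_{S^i} = K_* + a b^T

expected = K.copy()
expected[i, :] = 0.0                      # K with i-th row and column zeroed
expected[:, i] = 0.0
print(np.max(np.abs(K_Si - expected)))    # ~0: the update is exact
```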
{
"heading": "5 REMARK AND RELATED WORK",
"text": "In the previous section we obtained bounds on the CVloo stability of interpolating solutions to the kernel least squares problem. Our kernel least squares results can be compared with stability bounds for regularized ERM (see Remark 3). Regularized ERM has a strong stability guarantee in terms of a uniform stability bound, which turns out to be inversely proportional to the regularization parameter λ and the sample size n (Bousquet & Elisseeff, 2001). However, this estimate becomes vacuous as λ → 0. In this paper, we establish a bound on average stability, and show that this bound is minimized when the minimum norm ERM solution is chosen. We study average stability since one can expect worst case scenarios where the minimum norm is arbitrarily large (when n ≈ d).\n[Figure 2: Typical double descent of the condition number (y axis) of a radial basis function kernel K(x, x′) = exp(−‖x − x′‖²/(2σ²)) built from a random data matrix distributed as N(0, 1): as in the linear case, the condition number is worse when n = d, better if n > d (on the right of n = d) and also better if n < d (on the left of n = d). The parameter σ was chosen to be 5. From Poggio et al. (2019).]\nOne of our key findings is the relationship between minimizing the norm of the ERM solution and minimizing a bound on stability.\nThis leads to a second observation, namely, that we can consider the limit of our risk bounds as both the sample size (n) and the dimensionality of the data (d) go to infinity, with the ratio d/n → γ > 1 as n, d → ∞. This is a classical setting in statistics which allows us to use results from random matrix theory (Marchenko & Pastur, 1967). In particular, for linear kernels the behavior of the smallest eigenvalue of the kernel matrix (which appears in our bounds) can be characterized in this asymptotic limit. 
In fact, under appropriate distributional assumptions, our bound for linear regression behaves as ‖X†‖op × ‖y‖ ≈ √n/(√d − √n) → 1/(√γ − 1). Here the dimension of the data coincides with the number of parameters in the model. Interestingly, analogous results hold for more general kernels (inner product and RBF kernels) (El Karoui, 2010), where the asymptotics are taken with respect to the number and dimensionality of the data. These results predict a double descent curve for the condition number as found in practice, see Figure 2. While it may seem that our bounds in Theorem 7 diverge if d is held constant and n → ∞, this case is not covered by our theorem, since when n > d we no longer have interpolating solutions.\nRecently, there has been a surge of interest in studying linear and kernel least squares models, since classical results focus on situations where constraints or penalties that prevent interpolation are added to the empirical risk. For example, high dimensional linear regression is considered in Mei & Montanari (2019); Hastie et al. (2019); Bartlett et al. (2019), and “ridgeless” kernel least squares is studied in Liang et al. (2019); Rakhlin & Zhai (2018) and Liang et al. (2020). While these papers study upper and lower bounds on the risk of interpolating solutions to the linear and kernel least squares problem, ours are the first derived using stability arguments. While it might be possible to obtain tighter excess risk bounds through a careful analysis of the minimum norm interpolant, our simple approach helps us establish a link between stability in the statistical and in the numerical sense.\nFinally, we can compare our results with observations made in Poggio et al. (2019) on the condition number of random kernel matrices. The condition number of the empirical kernel matrix is known to control the numerical stability of the solution to a kernel least squares problem. 
Our results show that the statistical stability is also controlled by the condition number of the kernel matrix, providing a natural link between numerical and statistical stability."
},
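The double descent of the condition number discussed here (and shown in Figure 2 for an RBF kernel) can be sketched for the linear case, where the extreme nonzero singular values of a Gaussian X ∈ Rd×n concentrate near √d ± √n for n ≤ d (Marchenko & Pastur, 1967); the sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Condition number of a random Gaussian data matrix X in R^{d x n} for
# n < d, n = d, and n > d: it peaks near the "interpolation threshold" n = d.
d = 100
conds = {}
for n in [50, 100, 200]:
    X = rng.normal(size=(d, n))
    s = np.linalg.svd(X, compute_uv=False)   # singular values, descending
    conds[n] = s[0] / s[min(d, n) - 1]       # largest / smallest nonzero s.v.

print(conds)  # n = d is much worse conditioned than n = d/2 or n = 2d
```

This is a single draw, not an average over trials, but it reproduces the qualitative shape: well conditioned on both sides of n = d and badly conditioned at n = d.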
{
"heading": "6 CONCLUSIONS",
"text": "In summary, minimizing a bound on cross validation stability minimizes the expected error in both the classical and the modern regime of ERM. In the classical regime (d < n), CVloo stability implies generalization and consistency for n→∞. In the modern regime (d > n), as described in this paper, CVloo stability can account for the double descent curve in kernel interpolants (Belkin et al., 2019) under appropriate distributional assumptions. The main contribution of this paper is characterizing stability of interpolating solutions, in particular deriving excess risk bounds via a stability argument. In the process, we show that among all the interpolating solutions, the one with minimum norm also minimizes a bound on stability. Since the excess risk bounds of the minimum norm interpolant depend on the pseudoinverse of the kernel matrix, we establish here an elegant link between numerical and statistical stability. This also holds for solutions computed by gradient descent, since gradient descent converges to minimum norm solutions in the case of “linear” kernel methods. Our approach is simple and combines basic stability results with matrix inequalities."
},
{
"heading": "A EXCESS RISK, GENERALIZATION, AND STABILITY",
"text": "We use the same notation as introduced in Section 2 for the various quantities considered in this section. That is, in the supervised learning setup V (f, z) is the loss incurred by hypothesis f on the sample z, and I[f ] = Ez[V (f, z)] is the expected error of hypothesis f. Since we are interested in different forms of stability, we will consider learning problems over the original training set S = {z1, z2, . . . , zn}, the leave one out training set Si = {z1, . . . , zi−1, zi+1, . . . , zn}, and the replace one training set (Si, z) = {z1, . . . , zi−1, zi+1, . . . , zn, z}.\nA.1 REPLACE ONE AND LEAVE ONE OUT ALGORITHMIC STABILITY\nSimilar to the definition of expected CVloo stability in equation (3) of the main paper, we say an algorithm is cross validation replace one stable (in expectation), denoted as CVro, if there exists βro > 0 such that\nES,z[V (fS , z) − V (f(Si,z), z)] ≤ βro.\nWe can strengthen the above stability definition by introducing the notion of replace one algorithmic stability (in expectation) (Bousquet & Elisseeff, 2001): there exists αro > 0 such that for all i = 1, . . . , n,\nES,z[‖fS − f(Si,z)‖∞] ≤ αro.\nWe make two observations. First, if the loss is Lipschitz, that is, if there exists CV > 0 such that for all f, f′ ∈ H\n|V (f, z) − V (f′, z)| ≤ CV ‖f − f′‖,\nthen replace one algorithmic stability implies CVro stability with βro = CV αro. Moreover, the same result holds if the loss is locally Lipschitz and there exists R > 0 such that ‖fS‖∞ ≤ R almost surely. In this latter case the Lipschitz constant will depend on R. Later, we illustrate this situation for the square loss.\nSecond, we have for all i = 1, . . . 
, n, S and z,\nES,z[‖fS − f(Si,z)‖∞] ≤ ES,z[‖fS − fSi‖∞] + ES,z[‖f(Si,z) − fSi‖∞].\nThis observation motivates the notion of leave one out algorithmic stability (in expectation) (Bousquet & Elisseeff, 2001):\nES,z[‖fS − fSi‖∞] ≤ αloo.\nClearly, leave one out algorithmic stability implies replace one algorithmic stability with αro = 2αloo, and it also implies CVro stability with βro = 2CV αloo.\nA.2 EXCESS RISK AND CVloo, CVro STABILITY\nWe recall the statement of Lemma 5 in Section 3, which bounds the excess risk using the CVloo stability of a solution.\nLemma 13 (Excess Risk & CVloo Stability) For all i = 1, . . . , n,\nES [I[fSi ]− inf f∈H I[f ]] ≤ ES [V (fSi , zi)− V (fS , zi)]. (16)\nIn this section, two properties of ERM are useful, namely symmetry and a form of unbiasedness.\nSymmetry. A key property of ERM is that it is symmetric with respect to the data set S, meaning that it does not depend on the order of the data in S.\nA second property relates the expected ERM with the minimum of the expected risk.\nERM Bias. The following inequality holds.\nES [IS [fS ]]−min f∈H I[f ] ≤ 0. (17)\nTo see this, note that IS [fS ] ≤ IS [f ]\nfor all f ∈ H by definition of ERM, so that taking the expectation of both sides ES [IS [fS ]] ≤ ES [IS [f ]] = I[f ]\nfor all f ∈ H. This implies ES [IS [fS ]] ≤ min f∈H I[f ]\nand hence (17) holds.\nRemark 14 Note that the same argument gives more generally that\nES [ inf f∈H IS [f ]]− inf f∈H I[f ] ≤ 0. (18)\nGiven the above premise, the proof of Lemma 5 is simple.\nProof [of Lemma 5] Adding and subtracting ES [IS [fS ]] from the expected excess risk we have that\nES [I[fSi ]−min f∈H I[f ]] = ES [I[fSi ]− IS [fS ] + IS [fS ]−min f∈H I[f ]], (19)\nand since ES [IS [fS ]]−min f∈H I[f ] is less than or equal to zero, see (18), we have\nES [I[fSi ]−min f∈H I[f ]] ≤ ES [I[fSi ]− IS [fS ]]. (20)\nMoreover, for all i = 1, . . . 
, n,\nES [I[fSi ]] = ES [Ezi [V (fSi , zi)]] = ES [V (fSi , zi)],\nand\nES [IS [fS ]] = (1/n) ∑_{i=1}^{n} ES [V (fS , zi)] = ES [V (fS , zi)].\nPlugging these last two expressions into (20) and (19) leads to (4).\nWe can prove a similar result relating the excess risk to CVro stability.\nLemma 15 (Excess Risk & CVro Stability) Given the above definitions, the following inequality holds for all i = 1, . . . , n,\nES [I[fS ]− inf f∈H I[f ]] ≤ ES [I[fS ]− IS [fS ]] = ES,z[V (fS , z)− V (f(Si,z), z)]. (21)\nProof The first inequality follows from adding and subtracting IS [fS ] from the expected risk I[fS ], which gives\nES [I[fS ]−min f∈H I[f ]] = ES [I[fS ]− IS [fS ] + IS [fS ]−min f∈H I[f ]],\nand recalling (18). The main step in the proof is showing that for all i = 1, . . . , n,\nES [IS [fS ]] = ES,z[V (f(Si,z), z)] (22)\nto be compared with the trivial equality ES [IS [fS ]] = ES [V (fS , zi)]. To prove Equation (22), we have for all i = 1, . . . , n,\nES [IS [fS ]] = ES,z[(1/n) ∑_{i=1}^{n} V (fS , zi)] = (1/n) ∑_{i=1}^{n} ES,z[V (f(Si,z), z)] = ES,z[V (f(Si,z), z)],\nwhere we used the fact that, by the symmetry of the algorithm, ES,z[V (f(Si,z), z)] is the same for all i = 1, . . . , n. The proof is concluded by noting that ES [I[fS ]] = ES,z[V (fS , z)].\nA.3 DISCUSSION ON STABILITY AND GENERALIZATION\nBelow we discuss further aspects of stability and its connection to other quantities in statistical learning theory.\nRemark 16 (CVloo stability in expectation and in probability) In Mukherjee et al. (2006), CVloo stability is defined in probability, that is, there exist βPCV > 0, 0 < δPCV ≤ 1 such that\nPS{V (fSi , zi)− V (fS , zi) ≥ βPCV } ≤ δPCV .\nNote that the absolute value is not needed for ERM since almost positivity holds (Mukherjee et al., 2006), that is, V (fSi , zi)− V (fS , zi) > 0. CVloo stability in probability and in expectation are then clearly related, and indeed equivalent for bounded loss functions. 
CVloo stability in expectation (3) is what we study in the following sections.\nRemark 17 (Connection to uniform stability and other notions of stability) Uniform stability, introduced by Bousquet & Elisseeff (2001), corresponds in our notation to the assumption that there exists βu > 0 such that for all i = 1, . . . , n, supz∈Z V (fSi , z) − V (fS , z) ≤ βu. Clearly this is a strong notion, implying most other definitions of stability. We note that there are a number of different notions of stability; we refer the interested reader to Kutin & Niyogi (2002) and Mukherjee et al. (2006).\nRemark 18 (CVloo Stability & Learnability) A natural question is to which extent suitable notions of stability are not only sufficient but also necessary for controlling the excess risk of ERM. Classically, the latter is characterized in terms of a uniform version of the law of large numbers, which itself can be characterized in terms of suitable complexity measures of the hypothesis class. Uniform stability is too strong to characterize consistency, while CVloo stability turns out to provide a suitably weak definition, as shown in Mukherjee et al. (2006); see also Kutin & Niyogi (2002). Indeed, a main result in Mukherjee et al. (2006) shows that CVloo stability is equivalent to consistency of ERM:\nTheorem 19 (Mukherjee et al., 2006) For ERM and bounded loss functions, CVloo stability in probability with βPCV converging to zero for n→∞ is equivalent to consistency and generalization of ERM.\nRemark 20 (CVloo stability & insample/outofsample error) Let (S, z) = {z1, . . . , zn, z} (z is a data point drawn according to the same distribution), and let f(S,z) be the corresponding ERM solution; then (4) can be equivalently written as\nES [I[fS ]− inf f∈F I[f ]] ≤ ES,z[V (fS , z)− V (f(S,z), z)].\nThus CVloo stability measures how much the loss changes when we test on a point that is present in the training set versus one that is absent from it. 
In this view, it can be seen as an average measure of the difference between insample and outofsample error.\nRemark 21 (CVloo stability and generalization) A common error measure is the (expected) generalization gap ES [I[fS ]− IS [fS ]]. For nonERM algorithms, CVloo stability is not by itself sufficient to control this term, and further conditions are needed (Mukherjee et al., 2006), since\nES [I[fS ]− IS [fS ]] = ES [I[fS ]− IS [fSi ]] + ES [IS [fSi ]− IS [fS ]].\nThe second term becomes, for all i = 1, . . . , n,\nES [IS [fSi ]− IS [fS ]] = (1/n) ∑_{i=1}^{n} ES [V (fSi , zi)− V (fS , zi)] = ES [V (fSi , zi)− V (fS , zi)],\nand hence is controlled by CV stability. The first term is called the expected leave one out error in Mukherjee et al. (2006) and is controlled in ERM as n→∞; see Theorem 19 above.\nB CVloo STABILITY OF LINEAR REGRESSION\nWe have a dataset S = {(xi, yi)}ni=1 and we want to find a mapping w ∈ Rd that minimizes the empirical least squares risk. All interpolating solutions are of the form (ŵS)> = y>X† + v>(I−XX†). Similarly, all interpolating solutions on the leave one out dataset Si can be written as (ŵSi)> = y>i (Xi)† + v>i (I−Xi(Xi)†). Here X,Xi ∈ Rd×n are the data matrices for the original and leave one out datasets, respectively. We note that when v = 0 and vi = 0, we obtain the minimum norm interpolating solutions on the datasets S and Si.\nIn this section we want to estimate the CVloo stability of the minimum norm solution to the ERM problem in the linear regression case. This is the case outlined in Remark 9 of the main paper. In order to prove Remark 9, we only need to combine Lemma 10 with the linear regression analogue of Lemma 11. We state and prove that result in this section. 
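The parameterization of interpolating solutions above is easy to verify numerically. The following sketch is our own (arbitrary dimensions, Gaussian data): it checks that every choice of v yields an interpolant, and that v = 0 gives the minimum norm, since y>X† and v>(I − XX†) are orthogonal.

```python
# Sketch (our illustration of the setup in Appendix B): in the
# overparameterized regime d > n, every interpolating solution has the form
#   w^T = y^T X† + v^T (I - X X†),
# and v = 0 gives the minimum-norm interpolant because X†(I - X X†) = 0.
import numpy as np

rng = np.random.default_rng(1)
d, n = 50, 20
X = rng.standard_normal((d, n))   # columns are the n training points
y = rng.standard_normal(n)

Xp = np.linalg.pinv(X)            # X† in R^{n x d}
P = np.eye(d) - X @ Xp            # orthogonal projector onto null(X^T)

w_min = Xp.T @ y                  # minimum-norm interpolant (v = 0)
v = rng.standard_normal(d)
w_any = w_min + P @ v             # an arbitrary interpolant (P symmetric)

assert np.allclose(w_min @ X, y)  # both interpolate the training data
assert np.allclose(w_any @ X, y)
assert np.linalg.norm(w_min) <= np.linalg.norm(w_any)  # v = 0 is minimal
```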
This result predicts a double descent curve for the norm of the pseudoinverse as found in practice, see Figure 3.\nLemma 22 Let ŵS , ŵSi be any interpolating least squares solutions on the full and leave one out datasets S, Si. Then ‖ŵS − ŵSi‖ ≤ BCV (X†, y, v, vi), and BCV is minimized when v = vi = 0, which corresponds to the minimum norm interpolating solutions w†S , w†Si. Also,\n‖w†S − w†Si‖ ≤ 3 ‖X†‖op × ‖y‖ (23)\nAs mentioned before in section 2.1 of the main paper, linear regression can be viewed as a case of the kernel regression problem where H = Rd and the feature map Φ is the identity map. The inner product and norms considered in this case are the usual Euclidean inner product and 2norm for vectors in Rd. The notation ‖·‖ denotes the Euclidean norm for vectors both in Rd and Rn; the usage should be clear from the context. Also, ‖A‖op is the left operator norm for a matrix A ∈ Rn×d, that is, ‖A‖op = sup{‖y>A‖ : y ∈ Rn, ‖y‖ = 1}.\nWe have n samples in the training set for a linear regression problem, {(xi, yi)}ni=1. We collect all the samples into a single matrix/vector X = [x1 x2 . . . xn] ∈ Rd×n, and y = [y1 y2 . . . yn]> ∈ Rn. Then any interpolating ERM solution wS satisfies the linear equation\nw>S X = y> (24)\nAny interpolating solution can be written as:\n(ŵS)> = y>X† + v>(I−XX†). (25)\nIf we consider the leave one out training set Si, we can find the minimum norm ERM solution for Xi = [x1 . . . 0 . . . xn] and yi = [y1 . . . 0 . . . yn]> as\n(ŵSi)> = y>i (Xi)† + v>i (I−Xi(Xi)†). (26)\nWe can write Xi as:\nXi = X + ab> (27)\nwhere a ∈ Rd is a column vector representing the additive change to the ith column, i.e., a = −xi, and b ∈ Rn is the ith element of the canonical basis in Rn (all coefficients are zero except the ith, which is one). Thus ab> is a d × n matrix composed of all zeros apart from the ith column, which is equal to a.\nWe also have yi = y − yib. Now, per Lemma 10, we are interested in bounding the quantity ‖ŵSi − ŵS‖ = ‖(ŵSi)> − (ŵS)>‖. 
This simplifies to:\n(ŵSi − ŵS)> = y>i (Xi)† − y>X† + v>i − v> + v>XX† − v>i Xi(Xi)†\n= (y> − yib>)(Xi)† − y>X† + v>i − v> + v>XX† − v>i Xi(Xi)†\n= y>((Xi)† − X†) − yib>(Xi)† + v>i − v> + v>XX† − v>i Xi(Xi)†\n= y>((Xi)† − X†) + v>i − v> + v>XX† − v>i Xi(Xi)†\n= y>((Xi)† − X†) + (v>i − v>)(I−XX†) + v>i (XX† − Xi(Xi)†) (28)\nIn the above equation we make use of the fact that b>(Xi)† = 0. We use a classical formula (Meyer, 1973; Baksalary et al., 2003) to compute (Xi)† from X†, following the development of pseudoinverses of perturbed matrices in Meyer (1973). We see that a = −xi is a vector in the column space of X and b is in the range space of X> (provided X has full column rank), with β = 1 + b>X†a = 1 − b>X†xi = 0. This means we can use Theorem 6 in Meyer (1973) (equivalent to formula 2.1 in Baksalary et al. (2003)) to obtain the expression for (Xi)†:\n(Xi)† = X† − kk†X† − X†h†h + (k†X†h†)kh (29)\nwhere k = X†a, h = b>X†, and u† = u>/‖u‖² for any nonzero vector u. Then\n(Xi)† − X† = (k†X†h†)kh − kk†X† − X†h†h\n= (a>(X†)>X†(X†)>b) × kh/(‖k‖² ‖h‖²) − kk†X† − X†h†h\n=⇒ ‖(Xi)† − X†‖op ≤ |a>(X†)>X†(X†)>b| / (‖X†a‖ ‖b>X†‖) + 2 ‖X†‖op\n≤ (‖X†‖op ‖X†a‖ ‖b>X†‖) / (‖X†a‖ ‖b>X†‖) + 2 ‖X†‖op\n= 3 ‖X†‖op (30)\nThe above set of inequalities follows from the fact that the operator norm of a rank 1 matrix is given by ‖uv>‖op = ‖u‖ × ‖v‖.\nAlso, from List 2 of Baksalary et al. (2003) we have that Xi(Xi)† = XX† − h†h. Plugging these calculations into equation 28 we get:\n‖ŵSi − ŵS‖ = ‖y>((Xi)† − X†) + (v>i − v>)(I−XX†) + v>i (XX† − Xi(Xi)†)‖\n≤ B0 + ‖I−XX†‖op ‖v − vi‖ + ‖vi‖ × ‖h†h‖op\n≤ B0 + ‖v − vi‖ + ‖vi‖ (31)\nWe see that the right hand side is minimized when v = vi = 0. We can also compute B0 = 3 ‖X†‖op ‖y‖, which concludes the proof of Lemma 22."
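The perturbation bound (30) can be checked numerically. The sketch below is ours (not the paper's code): it zeroes out each column of a random full-column-rank X in turn and verifies that the pseudoinverse moves by at most 3‖X†‖ in operator norm.

```python
# Numerical sanity check (a sketch, arbitrary dimensions) of the bound
#   ||(X_i)† - X†||_op <= 3 ||X†||_op
# used in the proof of Lemma 22, where X_i is X with column i zeroed out.
import numpy as np

rng = np.random.default_rng(2)
d, n = 60, 25
X = rng.standard_normal((d, n))      # full column rank almost surely

Xp = np.linalg.pinv(X)
bound = 3.0 * np.linalg.norm(Xp, ord=2)

for i in range(n):                   # leave each sample out in turn
    Xi = X.copy()
    Xi[:, i] = 0.0
    diff = np.linalg.norm(np.linalg.pinv(Xi) - Xp, ord=2)
    assert diff <= bound + 1e-10
```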
}
]
 2020
 
SP:b80bc890180934092cde037b49d94d6e4e06fad9
 [
"This paper presents a novel way of making full use of compact episodic memory to alleviate catastrophic forgetting in continual learning. This is done by adding the proposed discriminative representation loss to regularize the gradients produced by new samples. Authors gave insightful analysis on the influence of gradient diversity to the performance of continual learning, and proposed a regularization that connects metric learning and continual learning. However, there are still some issues to be addressed as below."
]
 The use of episodic memories in continual learning has been shown to be effective in terms of alleviating catastrophic forgetting. In recent studies, several gradient-based approaches have been developed to make more efficient use of compact episodic memories; these constrain the gradients resulting from new samples with those from memorized samples, aiming to reduce the diversity of gradients from different tasks. In this paper, we reveal the relation between the diversity of gradients and the discriminativeness of representations, demonstrating connections between Deep Metric Learning and continual learning. Based on these findings, we propose a simple yet highly efficient method – Discriminative Representation Loss (DRL) – for continual learning. In comparison with several state-of-the-art methods, DRL shows effectiveness with low computational cost on multiple benchmark experiments in the setting of online continual learning.
 []
 [
{
"authors": [
"Rahaf Aljundi",
"Min Lin",
"Baptiste Goujaud",
"Yoshua Bengio"
],
"title": "Gradient based sample selection for online continual learning",
"venue": "In Advances in Neural Information Processing Systems,",
"year": 2019
},
{
"authors": [
"Arslan Chaudhry",
"Puneet K Dokania",
"Thalaiyasingam Ajanthan",
"Philip HS Torr"
],
"title": "Riemannian walk for incremental learning: Understanding forgetting and intransigence",
"venue": "In Proceedings of the European Conference on Computer Vision (ECCV),",
"year": 2018
},
{
"authors": [
"Arslan Chaudhry",
"Marc’Aurelio Ranzato",
"Marcus Rohrbach",
"Mohamed Elhoseiny"
],
"title": "Efficient lifelong learning with aGEM",
"venue": "In International Conference on Learning Representations,",
"year": 2019
},
{
"authors": [
"Arslan Chaudhry",
"Marcus Rohrbach",
"Mohamed Elhoseiny",
"Thalaiyasingam Ajanthan",
"Puneet K Dokania",
"Philip HS Torr",
"Marc’Aurelio Ranzato"
],
"title": "On tiny episodic memories in continual learning",
"venue": "arXiv preprint arXiv:1902.10486,",
"year": 2019
},
{
"authors": [
"Yu Chen",
"Tom Diethe",
"Neil Lawrence"
],
"title": "Facilitating bayesian continual learning by natural gradients and stein gradients",
"venue": "Continual Learning Workshop of 32nd Conference on Neural Information Processing Systems (NeurIPS",
"year": 2018
},
{
"authors": [
"Jiankang Deng",
"Jia Guo",
"Niannan Xue",
"Stefanos Zafeiriou"
],
"title": "Arcface: Additive angular margin loss for deep face recognition",
"venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,",
"year": 2019
},
{
"authors": [
"Tom Diethe",
"Tom Borchert",
"Eno Thereska",
"Borja de Balle Pigem",
"Neil Lawrence"
],
"title": "Continual learning in practice",
"venue": "In Continual Learning Workshop of 32nd Converence on Neural Information Processing Systems (NeurIPS",
"year": 2018
},
{
"authors": [
"Mehrdad Farajtabar",
"Navid Azizan",
"Alex Mott",
"Ang Li"
],
"title": "Orthogonal gradient descent for continual learning",
"venue": "In International Conference on Artificial Intelligence and Statistics,",
"year": 2020
},
{
"authors": [
"ChingYi Hung",
"ChengHao Tu",
"ChengEn Wu",
"ChienHung Chen",
"YiMing Chan",
"ChuSong Chen"
],
"title": "Compacting, picking and growing for unforgetting continual learning",
"venue": "In Advances in Neural Information Processing Systems,",
"year": 2019
},
{
"authors": [
"Mahmut Kaya",
"Hasan Şakir Bilge"
],
"title": "Deep metric learning: A survey",
"venue": "Symmetry,",
"year": 2019
},
{
"authors": [
"James Kirkpatrick",
"Razvan Pascanu",
"Neil Rabinowitz",
"Joel Veness",
"Guillaume Desjardins",
"Andrei A Rusu",
"Kieran Milan",
"John Quan",
"Tiago Ramalho",
"Agnieszka GrabskaBarwinska"
],
"title": "Overcoming catastrophic forgetting in neural networks",
"venue": "Proceedings of the national academy of sciences,",
"year": 2017
},
{
"authors": [
"Alex Krizhevsky",
"Geoffrey Hinton"
],
"title": "Learning multiple layers of features from tiny images",
"venue": "Technical report, Citeseer,",
"year": 2009
},
{
"authors": [
"Ya Le",
"Xuan Yang"
],
"title": "Tiny imagenet visual recognition challenge",
"venue": "CS 231N,",
"year": 2015
},
{
"authors": [
"Yann LeCun",
"Corinna Cortes",
"Christopher JC Burges"
],
"title": "MNIST handwritten digit database",
"venue": "AT&T Labs [Online]. Available: http://yann.lecun.com/exdb/mnist,",
"year": 2010
},
{
"authors": [
"Timothée Lesort",
"Vincenzo Lomonaco",
"Andrei Stoian",
"Davide Maltoni",
"David Filliat",
"Natalia Dı́azRodrı́guez"
],
"title": "Continual learning for robotics",
"venue": "arXiv preprint arXiv:1907.00182,",
"year": 2019
},
{
"authors": [
"Jinlong Liu",
"Yunzhi Bai",
"Guoqing Jiang",
"Ting Chen",
"Huayan Wang"
],
"title": "Understanding why neural networks generalize well through GSNR of parameters",
"venue": "In International Conference on Learning Representations,",
"year": 2020
},
{
"authors": [
"David LopezPaz",
"Marc’Aurelio Ranzato"
],
"title": "Gradient episodic memory for continual learning",
"venue": "In Advances in Neural Information Processing Systems,",
"year": 2017
},
{
"authors": [
"Michael McCloskey",
"Neal J Cohen"
],
"title": "Catastrophic interference in connectionist networks: The sequential learning problem",
"venue": "In Psychology of learning and motivation,",
"year": 1989
},
{
"authors": [
"Sebastian Mika",
"Gunnar Ratsch",
"Jason Weston",
"Bernhard Scholkopf",
"Klaus-Robert Müller"
],
"title": "Fisher discriminant analysis with kernels",
"venue": "In Neural networks for signal processing IX: Proceedings of the 1999 IEEE signal processing society workshop (cat. no",
"year": 1999
},
{
"authors": [
"Cuong V Nguyen",
"Yingzhen Li",
"Thang D Bui",
"Richard E Turner"
],
"title": "Variational continual learning",
"venue": "In International Conference on Learning Representations,",
"year": 2018
},
{
"authors": [
"Matthew Riemer",
"Ignacio Cases",
"Robert Ajemian",
"Miao Liu",
"Irina Rish",
"Yuhai Tu",
"Gerald Tesauro"
],
"title": "Learning to learn without forgetting by maximizing transfer and minimizing interference",
"venue": "In International Conference on Learning Representations,",
"year": 2019
},
{
"authors": [
"David Rolnick",
"Arun Ahuja",
"Jonathan Schwarz",
"Timothy Lillicrap",
"Gregory Wayne"
],
"title": "Experience replay for continual learning",
"venue": "In Advances in Neural Information Processing Systems,",
"year": 2019
},
{
"authors": [
"Karsten Roth",
"Timo Milbich",
"Samarth Sinha",
"Prateek Gupta",
"Bjoern Ommer",
"Joseph Paul Cohen"
],
"title": "Revisiting training strategies and generalization performance in deep metric learning",
"venue": "In Proceedings of the 37th International Conference on Machine Learning,",
"year": 2020
},
{
"authors": [
"Florian Schroff",
"Dmitry Kalenichenko",
"James Philbin"
],
"title": "Facenet: A unified embedding for face recognition and clustering",
"venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,",
"year": 2015
},
{
"authors": [
"Jonathan Schwarz",
"Jelena Luketina",
"Wojciech M Czarnecki",
"Agnieszka GrabskaBarwinska",
"Yee Whye Teh",
"Razvan Pascanu",
"Raia Hadsell"
],
"title": "Progress & compress: A scalable framework for continual learning",
"venue": "arXiv preprint arXiv:1805.06370,",
"year": 2018
},
{
"authors": [
"Hanul Shin",
"Jung Kwon Lee",
"Jaehong Kim",
"Jiwon Kim"
],
"title": "Continual learning with deep generative replay",
"venue": "In Advances in Neural Information Processing Systems,",
"year": 2017
},
{
"authors": [
"Huangshi Tian",
"Minchen Yu",
"Wei Wang"
],
"title": "Continuum: A platform for costaware, lowlatency continual learning",
"venue": "In Proceedings of the ACM Symposium on Cloud Computing,",
"year": 2018
},
{
"authors": [
"Jian Wang",
"Feng Zhou",
"Shilei Wen",
"Xiao Liu",
"Yuanqing Lin"
],
"title": "Deep metric learning with angular loss",
"venue": "In Proceedings of the IEEE International Conference on Computer Vision,",
"year": 2017
},
{
"authors": [
"Xun Wang",
"Xintong Han",
"Weilin Huang",
"Dengke Dong",
"Matthew R Scott"
],
"title": "Multisimilarity loss with general pair weighting for deep metric learning",
"venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,",
"year": 2019
},
{
"authors": [
"Kilian Q Weinberger",
"John Blitzer",
"Lawrence K Saul"
],
"title": "Distance metric learning for large margin nearest neighbor classification",
"venue": "In Advances in neural information processing systems,",
"year": 2006
},
{
"authors": [
"ChaoYuan Wu",
"R Manmatha",
"Alexander J Smola",
"Philipp Krahenbuhl"
],
"title": "Sampling matters in deep embedding learning",
"venue": "In Proceedings of the IEEE International Conference on Computer Vision,",
"year": 2017
},
{
"authors": [
"Han Xiao",
"Kashif Rasul",
"Roland Vollgraf"
],
"title": "Fashionmnist: a novel image dataset for benchmarking machine learning algorithms, 2017",
"venue": null,
"year": 2017
},
{
"authors": [
"Friedemann Zenke",
"Ben Poole",
"Surya Ganguli"
],
"title": "Continual learning through synaptic intelligence",
"venue": "In International Conference on Machine Learning,",
"year": 2017
}
]
 [
{
"heading": "1 INTRODUCTION",
"text": "In the real world, we are often faced with situations where data distributions are changing over time, and we would like to update our models with new data in time, with bounded growth in system size. These situations fall under the umbrella of “continual learning”, which has many practical applications, such as recommender systems, retail supply chain optimization, and robotics (Lesort et al., 2019; Diethe et al., 2018; Tian et al., 2018). Comparisons have also been made with the way that humans are able to learn new tasks without forgetting previously learned ones, using common knowledge shared across different skills. The fundamental problem in continual learning is catastrophic forgetting (McCloskey & Cohen, 1989; Kirkpatrick et al., 2017), i.e. (neural network) models have a tendency to forget previously learned tasks while learning new ones.\nThere are three main categories of methods for alleviating forgetting in continual learning: i) regularization-based methods, which aim at preserving knowledge of models of previous tasks (Kirkpatrick et al., 2017; Zenke et al., 2017; Nguyen et al., 2018); ii) architecture-based methods, which incrementally evolve the model by learning task-shared and task-specific components (Schwarz et al., 2018; Hung et al., 2019); iii) replay-based methods, which focus on preserving knowledge of the data distributions of previous tasks, including methods of experience replay by episodic memories or generative models (Shin et al., 2017; Rolnick et al., 2019), methods for generating compact episodic memories (Chen et al., 2018; Aljundi et al., 2019), and methods for more efficiently using episodic memories (Lopez-Paz & Ranzato, 2017; Chaudhry et al., 2019a; Riemer et al., 2019; Farajtabar et al., 2020).\nGradient-based approaches using episodic memories, in particular, have been receiving increasing attention. 
The essential idea is to use gradients produced by samples from episodic memories to constrain the gradients produced by new samples, e.g. by ensuring that the inner product of the pair of gradients is nonnegative (Lopez-Paz & Ranzato, 2017), as follows:\n〈gt, gk〉 = 〈∂L(xt, θ)/∂θ, ∂L(xk, θ)/∂θ〉 ≥ 0, ∀k < t (1)\nwhere t and k are time indices, xt denotes a new sample from the current task, and xk denotes a sample from the episodic memory. Thus, the updates of parameters are forced to preserve the performance on previous tasks as much as possible.\nIn Gradient Episodic Memory (GEM) (Lopez-Paz & Ranzato, 2017), gt is projected to the direction that is closest to it in L2 norm whilst also satisfying Eq. (1): min_g̃ (1/2)‖gt − g̃‖²₂, s.t. 〈g̃, gk〉 ≥ 0, ∀k < t. Optimization of this objective requires solving a high-dimensional quadratic program and is thus computationally expensive. Averaged GEM (A-GEM) (Chaudhry et al., 2019a) alleviates the computational burden of GEM by using the averaged gradient over a batch of samples instead of the individual gradients of samples in the episodic memory. This not only simplifies the computation, but also obtains performance comparable with GEM. Orthogonal Gradient Descent (OGD) (Farajtabar et al., 2020) projects gt to the direction that is perpendicular to the surface formed by {gk : k < t}. Moreover, Aljundi et al. (2019) propose Gradient-based Sample Selection (GSS), which selects for the episodic memory those samples whose gradients are most diverse with respect to the other samples. Here diversity is measured by the cosine similarity between gradients. Since the cosine similarity is computed using the inner product of two normalized gradients, GSS embodies the same principle as other gradient-based approaches with episodic memories. Although GSS suggests the samples with most diverse gradients are important for generalization across tasks, Chaudhry et al. 
(2019b) show that the average gradient over a small set of random samples may obtain good generalization as well.\nIn this paper, we answer the following questions: i) Which samples tend to produce diverse gradients that strongly conflict with other samples, and why are such samples able to help with generalization? ii) Why does a small set of randomly chosen samples also help with generalization? iii) Can we reduce the diversity of gradients in a more efficient way? Our answers reveal the relation between the diversity of gradients and the discriminativeness of representations, and further show connections between Deep Metric Learning (DML) (Kaya & Bilge, 2019; Roth et al., 2020) and continual learning. Drawing on these findings we propose a new approach, Discriminative Representation Loss (DRL), for classification tasks in continual learning. Our method shows improved performance, with relatively low computational cost in terms of time and RAM, when compared to several state-of-the-art (SOTA) methods across multiple benchmark tasks in the setting of online continual learning."
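The A-GEM relaxation of constraint (1) admits a closed-form projection. The sketch below is our own illustration of the projection step as we understand it from Chaudhry et al. (2019a), with g_ref standing for the gradient averaged over the episodic memory:

```python
# Sketch of the A-GEM projection: if the proposed gradient g conflicts with
# the memory's reference gradient g_ref (negative inner product), project g
# onto the half-space <g~, g_ref> >= 0 via
#   g~ = g - (<g, g_ref> / <g_ref, g_ref>) g_ref.
import numpy as np

def agem_project(g: np.ndarray, g_ref: np.ndarray) -> np.ndarray:
    dot = float(g @ g_ref)
    if dot >= 0.0:                 # no conflict: keep the gradient as-is
        return g
    return g - (dot / float(g_ref @ g_ref)) * g_ref

g = np.array([1.0, -2.0])
g_ref = np.array([0.0, 1.0])
g_tilde = agem_project(g, g_ref)   # conflicting component removed
assert g_tilde @ g_ref >= -1e-12   # constraint (1) now holds w.r.t. g_ref
```

Only one inner product and one vector update per step, which is what makes A-GEM so much cheaper than GEM's quadratic program.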
},
{
"heading": "2 A NEW PERSPECTIVE OF REDUCING DIVERSITY OF GRADIENTS",
"text": "According to Eq. (1), negative cosine similarities between gradients produced by current and previous tasks result in worse performance in continual learning. This can be interpreted from the perspective of constrained optimization, as discussed by Aljundi et al. (2019). Moreover, the diversity of gradients relates to the Gradient Signal to Noise Ratio (GSNR) (Liu et al., 2020), which plays a crucial role in the model’s generalization ability. Intuitively, when more of the gradients point in diverse directions, the variance will be larger, leading to a smaller GSNR, which indicates that reducing the diversity of gradients can improve generalization. This finding leads to the conclusion that samples with the most diverse gradients contain the most critical information for generalization, which is consistent with Aljundi et al. (2019)."
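The GSNR intuition can be made concrete in a few lines. This is our own sketch (not the paper's experiments): per-parameter GSNR is the squared mean of the per-sample gradients divided by their variance, so increasing gradient diversity drives it down.

```python
# Sketch: Gradient Signal to Noise Ratio (Liu et al., 2020). For each
# parameter j, GSNR_j = (E[g_j])^2 / Var[g_j], with the expectation taken
# over per-sample gradients. Synthetic gradients with higher diversity
# (larger spread around a shared direction) yield a smaller GSNR.
import numpy as np

rng = np.random.default_rng(4)

def gsnr(per_sample_grads: np.ndarray) -> np.ndarray:
    # per_sample_grads: shape (num_samples, num_params)
    mean = per_sample_grads.mean(axis=0)
    var = per_sample_grads.var(axis=0)
    return mean**2 / (var + 1e-12)

base = rng.standard_normal(10)                           # shared direction
aligned = base + 0.1 * rng.standard_normal((100, 10))    # low diversity
diverse = base + 2.0 * rng.standard_normal((100, 10))    # high diversity
assert gsnr(aligned).mean() > gsnr(diverse).mean()
```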
},
{
"heading": "2.1 THE SOURCE OF GRADIENT DIVERSITY",
"text": "We first conducted a simple experiment on classification tasks of 2D Gaussian distributions and tried to identify the samples with the most diverse gradients in the 2D feature space. We trained a linear model on the first task to discriminate between two classes (blue and orange dots in Fig. 1a). We then applied the algorithm Gradient-based Sample Selection with Integer Quadratic Programming (GSS-IQP) (Aljundi et al., 2019) to select the 10% of the training samples that produce gradients with the lowest similarity (black dots in Fig. 1a), and denote this set of samples as M̂ = argmin_M ∑_{i,j∈M} 〈gi, gj〉/(‖gi‖ · ‖gj‖).\nIt is clear from Fig. 1a that the samples in M̂ are mostly around the decision boundary between the two classes. Increasing the size of M̂ results in the inclusion of samples that trace the outer edges of the data distributions from each class. Clearly the gradients can be strongly opposed when samples from different classes are very similar. Samples close to decision boundaries are most likely to exhibit this characteristic. Intuitively, storing the decision boundaries of previously learned classes should be an effective way to preserve classification performance on those classes. However, if the episodic memory only includes samples representing the learned boundaries, it may miss important information when the model is required to incrementally learn new classes. We show this by introducing a second task: training the model above on a third class (green dots). We display the decision boundaries (which split the feature space in a one vs. all manner) learned by the model after task 2 in Fig. 1.\n(a) Samples with most diverse gradients (M̂) after learning task 1; the green line is the decision boundary.\n(b) Learned decision boundaries (purple lines) after task 2. 
Here the episodic memory includes samples in M̂.\n(c) Learned decision boundaries (purple lines) after task 2. Here the episodic memory consists of random samples.\nFigure 2: Illustration of how Pr(2β > α − δ) in Theorem 1 behaves in various cases, by drawing negative pairs from different subsets of a 3-class feature space, which are defined in Fig. 2a. The classifier is a linear model. (a) Splitting samples into several subsets in a 3-class classification task; dots in different colors are from different classes. (b) Estimated distributions of β when drawing negative pairs from different subsets of samples. (c) Estimated distributions of α − δ when drawing negative pairs from different subsets of samples. The y-axis on the right side of (b) & (c) is for the case of x ∈ S1 ∪ S2. We see that α − δ behaves in a similar way to β but in a smaller range, which makes β the key quantity in studying Pr(2β > α − δ). In the case of x ∈ S3 the distribution of β has more mass on larger values than in other cases, because the predicted probabilities are mostly on the two classes in a pair; this causes all 〈gn, gm〉 to have the opposite sign of 〈xn, xm〉, as shown in Tab. 1.\nWe compare training on task 2 with M̂ (Fig. 1b) and with a random set of samples from task 1 (Fig. 1c) as the episodic memory. The random episodic memory shows better performance than the one selected by GSS-IQP, since the new decision boundaries rely on samples not included in M̂. This explains why randomly selected memories may generalize better in continual learning. Ideally, with M̂ large enough, the model can remember all edges of each class, and hence learn much more accurate decision boundaries sequentially. However, memory size is often limited in practice, especially for high-dimensional data. A more efficient way could be learning more informative representations. 
The experimental results indicate that: 1) more similar representations in different classes result in more diverse gradients. 2) more diverse representations in a same class help with learning new boundaries incrementally.\nNow we formalise the connection between the diversity of gradients and the discriminativeness of representations for the linear model (proofs are in Appx. A). Notations: Negative pair represents two samples from different classes. Positive pair represents two samples from a same class. Let L represent the softmax cross entropy loss, W ∈ RD×K is the weight matrix of the linear model, and xn ∈ RD denotes the input data, yn ∈ RK is a onehot vector that denotes the label of xn, D is the dimension of representations, K is the number of classes. Let pn = softmax(on), where on = W\nTxn, the gradient gn = ∇WL(xn,yn;W). xn,xm are two different samples when n 6= m. Lemma 1. Let n = pn − yn, we have: 〈gn, gm〉 = 〈xn,xm〉〈 n, m〉,\nTheorem 1. Suppose yn 6= ym, and let cn denote the class index of xn (i.e. yn,cn = 1,yn,i = 0,∀i 6= cn). Let α , pn2 + pm2, β , pn,cm + pm,cn and δ , pn − pm22, then:\nPr (sign(〈gn, gm〉) = sign(−〈xn,xm〉)) = Pr(2β > α− δ),\nTheorem 2. Suppose yn = ym, when 〈gn, gm〉 6= 0, we have: sign(〈gn, gm〉) = sign(〈xn,xm〉)\nFor a better understanding of the theorems, we conduct empirical study by partitioning the feature space of three classes into several subsets as shown in Fig. 2a and examine four cases of pairwise samples by these subsets: 1). x ∈ S0, both samples in a pair are near the intersection of the three classes; 2). x ∈ S0∪S1, one sample is close to decision boundaries and the other is far away from the boundaries; 3). x ∈ S3, both samples close to the decision boundary between their true classes but away from the third class; 4). x ∈ S1 ∪ S2, both samples are far away from the decision boundaries. 
Theorem 1 says that for samples from different classes, 〈gn, gm〉 gets an opposite sign of 〈xn,xm〉 with a probability that depends on the predictions pn and pm. This probability of flipping the sign especially depends on β which reflects how likely to misclassify both samples to its opposite class. We show the empirical distributions of β and (α− δ) obtained by a linear model in Figs. 2b and 2c, respectively. In general, (α− δ) shows similar behaviors with β in the four cases but in a smaller range, which makes 2β > (α − δ) tends to be true except when β is around zero. Basically, a subset including more samples close to decision boundaries leads to more probability mass on large values of β, and the case of x ∈ S3 results in largest mass on large values of β because the predicted probabilities mostly concentrate on the two classes in a pair. As shown in Tab. 1, more mass on large values of β leads to larger probabilities of flipping the sign. These results demonstrate that samples with most diverse gradients (which gradients have largely negative similarities with other samples) are close to decision boundaries because they tend to have large β and 〈xn,xm〉 tend to be positive. In the case of x ∈ S1 ∪ S2 the probability of flipping the sign is zero because β concentrates around zero. According to Lemma 1 〈gn, gm〉 are very close to zero in this case because the predictions are close to true labels, hence, such samples are not considered as with most diverse gradients.\nTheorem 2 says 〈gn, gm〉 has the same sign as 〈xn,xm〉 when the two samples from a same class. We can see the results of positive pairs in Tab. 1 matches Theorem 2. In the case of S0 ∪ S1 the two probabilities do not add up to exactly 1 because the implementation of crossentropy loss in tensorflow smooths the function by a small value for preventing numerical issues which slightly changes the gradients. 
As 〈xn,xm〉 is mostly positive for positive pairs, 〈gn, gm〉 hence is also mostly positive, which explains why samples with most diverse gradients are not sufficient to preserve information within classes in experiments of Fig. 1. On the other hand, if 〈xn,xm〉 is negative then 〈gn, gm〉 will be negative, which indicates representations within a class should not be too diverse. Extending this theoretical analysis based on a linear model, we also provide empirical study of nonlinear models (Multilayer Perceptrons (MLPs)). As demonstrated in Tab. 1, the probability of flipping the sign in MLPs are very similar with the linear model since it only depends on the predictions and all models have learned reasonable decision boundaries. The probability of getting\nnegative 〈gn, gm〉 is also similar with the linear model except in the case of S1 ∪ S2 for negative pairs, in which the MLP with ReLU gets much less negative 〈gn, gm〉. As MLP with tanh activations is still consistent with the linear model in this case, we consider the difference is caused by the representations always being positive due to ReLU activations. These results demonstrate that nonlinear models exhibit similar behaviors with linear models that mostly align with the theorems.\nSince only negative 〈gn, gm〉 may cause conflicts, reducing the diversity of gradients hence relies on reducing negative 〈gn, gm〉. We consider to reduce negative 〈gn, gm〉 by two ways: 1).minimize the representation inner product of negative pairs, which pushes the inner product to be negative or zero (for positive representations); 2).optimize the predictions to decrease the probability of flipping the sign. In this sense, decreasing the representation similarity of negative pairs might help with both ways. In addition, according to Fig. 
2b x ∼ S3 gets larger prediction similarity than x ∼ S0 due to the predictions put most probability mass on both classes of a pair, which indicates decreasing the similarity of predictions may decrease the probability of flipping the sign. Hence, we include logits in the representations. We verify this idea by training two binary classifiers for two groups of MNIST classes ({0, 1} and {7, 9}). The classifiers have two hidden layers each with 100 hidden units and ReLU activations. We randomly chose 100 test samples from each group to compute the pairwise cosine similarities. Representations are obtained by concatenating the output of all layers (including logits) of the neural network, gradients are computed by all parameters of the model. We display the similarities in Figs. 3a and 3b. The correlation coefficients between the gradient and representation similarities of negative pairs are 0.86 and 0.85, which of positive pairs are 0.71 and 0.79. In all cases, the similarities of representations show strong correlations with the similarities of gradients. The classifier for class 0 and 1 gets smaller representation similarities and much less negative gradient similarities for negative pairs (blue dots) and it also gains a higher accuracy than the other classifier (99.95% vs. 96.25%), which illustrates the potential of reducing the gradient diversity by decreasing the representation similarity of negative pairs."
},
{
"heading": "2.2 CONNECTING DEEP METRIC LEARNING TO CONTINUAL LEARNING",
"text": "Reducing the representation similarity between classes shares the same concept as learning larger margins which has been an active research area for a few decades. For example, Kernel Fisher Discriminant analysis (KFD) (Mika et al., 1999) and distance metric learning (Weinberger et al., 2006) aim to learn kernels that can obtain larger margins in an implicit representation space, whereas Deep Metric Learning (DML) (Kaya & Bilge, 2019; Roth et al., 2020) leverages deep neural networks to learn embeddings that maximize margins in an explicit representation space. In this sense, DML has the potential to help with reducing the diversity of gradients in continual learning.\nHowever, the usual concepts in DML may not entirely be appropriate for continual learning, as they also aim in learning compact representations within classes (Schroff et al., 2015; Wang et al., 2017; Deng et al., 2019). In continual learning, the unused information for the current task might be important for a future task, e.g. in the experiments of Fig. 1 the ydimension is not useful for task 1 but useful for task 2. It indicates that learning compact representations in a current task might omit important dimensions in the representation space for a future task. In this case, even if we\nstore diverse samples into the memory, the learned representations may be difficult to generalize on future tasks as the omitted dimensions can only be relearned by using limited samples in the memory. We demonstrate this by training a model with and without L1 regulariztion on the first two tasks of splitMNIST and splitFashion MNIST. The results are shown in Tab. 2. We see that with L1 regularization the model learns much more compact representations and gives a similar performance on task 1 but much worse performance on task 2 comparing to without L1 regularization. 
The results suggest that continual learning shares the interests of maximizing margins in DML but prefers less compact representation space to preserve necessary information for future tasks. We suggest an opposite way regarding the withinclass compactness: minimizing the similarities within the same class for obtaining less compact representation space. Roth et al. (2020) proposed a ρspectrum metric to measure the information entropy contained in the representation space (details are provided in Appx. D) and introduced a ρregularization method to restrain overcompression of representations. The ρregularization method randomly replaces negative pairs by positive pairs with a preselected probability pρ. Nevertheless, switching pairs is inefficient and may be detrimental to the performance in an online setting because some negative pairs may never be learned in this way. Thus, we propose a different way to restrain the compression of representations which will be introduced in the following."
},
{
"heading": "3 DISCRIMINATIVE REPRESENTATION LOSS",
"text": "Based on our findings in the above section, we propose an auxiliary objective Discriminative Representation Loss (DRL) for classification tasks in continual learning, which is straightforward, robust, and efficient. Instead of explicitly reprojecting gradients during training process, DRL helps with decreasing gradient diversity by optimizing the representations. As defined in Eq. (2), DRL consists of two parts: one is for minimizing the similarities of representations from different classes (Lbt) which can reduce the diversity of gradients from different classes, the other is for minimizing the similarities of representations from a same class (Lwi) which helps preserve discriminative information for future tasks in continual learning.\nmin Θ LDRL = min Θ (Lbt + αLwi), α > 0,\nLbt = 1\nNbt B∑ i=1 B∑ j 6=i,yj 6=yi 〈hi, hj〉, Lwi = 1 Nwi B∑ i=1 B∑ j 6=i,yj=yi 〈hi, hj〉, (2)\nwhere Θ denotes the parameters of the model, B is training batch size. Nbt, Nwi are the number of negative and positive pairs, respectively. α is a hyperparameter controlling the strength of Lwi, hi is the representation of xi, yi is the label of xi. The final loss function combines the commonly used softmax cross entropy loss for classification tasks (L) with DRL (LDRL) as shown in Eq. (3),\nL̂ = L+ λLDRL, λ > 0, (3)\nwhere λ is a hyperparameter controlling the strength of LDRL, which is larger for increased resistance to forgetting, and smaller for greater elasticity. We provide experimental results to verify the effects of DRL and an ablation study on Lbt and Lwi (Tab. 7) in Appx. E, according to which Lbt and Lwi\nhave shown effectiveness on improving forgetting and ρspectrum, respectively. We will show the correlation between ρspectrum and the model performance in Sec. 5.\nThe computational complexity of DRL isO(B2H), whereB is training batch size,H is the dimension of representations. 
B is small (10 or 20 in our experiments) and commonly H W , where W is the number of network parameters. In comparison, the computational complexity of AGEM and GSSgreedy are O(BrW ) and O(BBmW ), respectively, where Br is the reference batch size in AGEM and Bm is the memory batch size in GSSgreedy. The computational complexity discussed here is additional to the cost of common backpropagation. We compare the training time of all methods on MNIST tasks in Tab. 9 in Appx. H, which shows the representationbased methods require much lower computational cost than gradientbased approaches."
},
{
"heading": "4 ONLINE MEMORY UPDATE AND BALANCED EXPERIENCE REPLAY",
"text": "We follow the online setting of continual learning as was done for other gradientbased approaches with episodic memories (LopezPaz & Ranzato, 2017; Chaudhry et al., 2019a; Aljundi et al., 2019), in which the model only trained with one epoch on the training data.\nWe update the episodic memories by the basic ring buffer strategy: keep the last nc samples of class c in the memory buffer, where nc is the memory size of a seen class c. We have deployed the episodic memories with a fixed size, implying a fixed budget for the memory cost. Further, we maintain a uniform distribution over all seen classes in the memory. The buffer may not be evenly allocated to each class before enough samples are acquired for newly arriving classes. We show pseudocode of the memory update strategy in Alg. 1 in Appx. B for a clearer explanation. For classincremental learning, this strategy can work without knowing task boundaries. Since DRL and methods of DML depend on the pairwise similarities of samples, we would prefer the training batch to include as wide a variety of different classes as possible to obtain sufficient discriminative information. Hence, we adjust the Experience Replay (ER) strategy (Chaudhry et al., 2019b) for the needs of such methods. The idea is to uniformly sample from seen classes in the memory buffer to form a training batch, so that this batch can contain as many seen classes as possible. Moreover, we ensure the training batch includes at least one positive pair of each selected class (minimum 2 samples in each class) to enable the parts computed by positive pairs in the loss. In addition, we also ensure the training batch includes at least one class from the current task. We call this Balanced Experience Replay (BER). The pseudo code is in Alg. 2 of Appx. B. Note that we update the memory and form the training batch based on the task ID instead of class ID for instanceincremental tasks (e.g. 
permuted MNIST tasks), as in this case each task always includes the same set of classes."
},
{
"heading": "5 EXPERIMENTS",
"text": "In this section we evaluate our methods on multiple benchmark tasks by comparing with several baseline methods in the setting of online continual learning.\nBenchmark tasks: We have conducted experiments on the following benchmark tasks: Permuted MNIST (10 tasks and each task includes the same 10 classes with different permutation of features), Split MNIST, Split FashionMNIST, and Split CIFAR10 (all three having 5 tasks with two classes in each task), Split CIFAR100 (10 tasks with 10 classes in each task), Split TinyImageNet (20 tasks with 10 classes in each task). All split tasks include disjoint classes. For tasks of MNIST (LeCun et al., 2010) and FashionMNIST (Xiao et al., 2017), the training size is 1000 samples per task, for CIFAR10 (Krizhevsky et al., 2009) the training size is 3000 per task, for CIFAR100 and TinyImageNet (Le & Yang, 2015) it is 5000 per task. N.B.: We use singlehead (shared output) models in all of our experiments, meaning that we do not require a task identifier at testing time. Such settings are more difficult for continual learning but more practical in real applications.\nBaselines: We compare our methods with: two gradientbased approaches (AGEM (Chaudhry et al., 2019a) and GSSgreedy (Aljundi et al., 2019)), two standalone experience replay methods (ER (Chaudhry et al., 2019b) and BER), two SOTA methods of DML (Multisimilarity (Wang et al., 2019) and RMargin (Roth et al., 2020)). We also trained a single task over all classes with one epoch for all benchmarks which performance can be viewed as a upper bound of each benchmark. N.B.: We deploy the losses of Multisimilarity and RMargin as auxiliary objectives as the same as DRL\nbecause using standalone such losses causes difficulties of convergence in our experimental settings. We provide the definitions of these two losses in Appx. 
D.\nPerformance measures: We use the Average accuracy, Average forgetting, Average intransigence to evaluate the performance of all methods, the definition of these measures are provided in Appx. C\nExperimental settings: We use the vanilla SGD optimizer for all experiments without any scheduling. For tasks on MNIST and FashionMNIST, we use a MLP with two hidden layers and ReLU activations, and each layer has 100 hidden units. For tasks on CIFAR datasets and TinyImageNet, we use the same reduced Resnet18 as used in Chaudhry et al. (2019a). All networks are trained from scratch without regularization scheme. For the MLP, representations are the concatenation of outputs of all layers including logits; for reduced Resnet18, representations are the concatenation of the input of the final linear layer and output logits. We concatenate outputs of all layers as we consider they behave like different levels of representation, and when higher layers (layers closer to the input) generate more discriminative representations it would be easier for lower layers to learn more discriminative representations as well. This method also improves the performance of MLPs. For reduced ResNet18 we found that including outputs of all hidden layers performs almost the same as only including the final representations, so we just use the final layer for lower computational cost. We deploy BER as the replay strategy for DRL, Multisimilarity, and RMargin. The memory size for tasks on MNIST and FashionMNIST is 300 samples. For tasks on CIFAR10 and CIFAR100 the memory size is 2000 and 5000 samples, respectively. For TinyImageNet it is also 5000 samples. The standard deviation shown in all results are evaluated over 10 runs with different random seeds. We use 10% of training set as validation set for choosing hyperparameters by cross validation. More details of experimental settings and hyperparameters are given in Appx. I.\nTabs. 
3 to 5 give the averaged accuracy, forgetting, and intransigence of all methods on all benchmark tasks, respectively. As we can see, the forgetting and intransigence often conflict with each other which is the most common phenomenon in continual learning. Our method DRL is able to get a better tradeoff between them and thus outperforms other methods over most benchmark tasks in terms of average accuracy. This could be because DRL facilitates getting a good intransigence and ρspectrum by Lwi and a good forgetting by Lbt. In DRL the two terms are complementary to each other and combining them brings benefits on both sides (an ablation study on the two terms are provide in Appx. E). According to Tabs. 4 and 5, Multisimilarity got better avg. intransigence and similar avg. forgetting on CIFAR10 compared with DRL which indicates Multisimilarity learns better representations to generalize on new classes in this case. Roth et.al. (2020) also suggests Multisimilarity is a very strong baseline in deep metric learning which outperforms the proposed RMargin on several datasets. And we use the hyperparameters of Multisimilarity recommended in Roth et.al. (2020) which generally perform well on multiple complex datasets. TinyImageNet gets much worse performance than other benchmarks because it has more classes (200), a longer task sequence (20 tasks), a larger feature space (64× 64× 3), and the accuracy of the single task on it is just about 17.8%. According to Tab. 3 the longer task sequence, more classes, and larger feature space all increase the gap between the performance of the single task and continual learning.\nAs shown in Tab. 6 the rhospectrum shows high correlation to average accuracy on most benchmarks since it may help with learning new decision boundaries across tasks. Split MNIST has shown a low correlation between the ρspectrum and avg. accuracy due to the ρspectrum highly correlates with the avg. intransigence and consequently affect the avg. 
forgetting in an opposite direction so that causes a cancellation of effects on avg. accuracy. In addition, we found that GSS often obtains a smaller ρ than other methods without getting a better performance. In general, the ρspectrum is the smaller the better because it indicates the representations are more informative. However, it may be detrimental to the performance when ρ is too small as the learned representations are too noisy. DRL is more robust to this issue because ρ keeps relatively stable when α is larger than a certain value as shown in Fig. 4c in Appx. E."
},
{
"heading": "6 CONCLUSION",
"text": "The two fundamental problems of continual learning with small episodic memories are: (i) how to make the best use of episodic memories; and (ii) how to construct most representative episodic memories. Gradientbased approaches have shown that the diversity of gradients computed on data from different tasks is a key to generalization over these tasks. In this paper we demonstrate that the\nmost diverse gradients are from samples that are close to class boundaries. We formally connect the diversity of gradients to discriminativeness of representations, which leads to an alternative way to reduce the diversity of gradients in continual learning. We subsequently exploit ideas from DML for learning more discriminative representations, and furthermore identify the shared and different interests between continual learning and DML. In continual learning we would prefer larger margins between classes as the same as in DML. The difference is that continual learning requires less compact representations for better compatibility with future tasks. Based on these findings, we provide a simple yet efficient approach to solving the first problem listed above. Our findings also shed light on the second problem: it would be better for the memorized samples to preserve as much variance as possible. In most of our experiments, randomly chosen samples outperform those selected by gradient diversity (GSS) due to the limit on memory size in practice. It could be helpful to select memorized samples by separately considering the representativeness of inter and intraclass samples, i.e., those representing margins and edges. We will leave this for future work."
},
{
"heading": "A PROOF OF THEOREMS",
"text": "Notations: Let L represent the softmax cross entropy loss, W ∈ RD×K is the weight matrix of the linear model, and xn ∈ RD denotes the input data, yn ∈ RK is a onehot vector that denotes the label of xn, D is the dimension of representations, K is the number of classes. Let pn = softmax(on), where on = WTxn, the gradient gn = ∇WL(xn,yn;W). xn,xm are two different samples when n 6= m. Lemma 1. Let n = pn − yn, we have 〈gn, gm〉 = 〈xn,xm〉〈 n, m〉,\nProof. Let ` ′\nn = ∂L(xn,yn;W)/∂on, by the chain rule, we have:\n〈gn, gm〉 = 〈xn,xm〉〈` ′ n, ` ′ m〉,\nBy the definition of L, we can find:\n` ′\nn = pn − yn, (4)\nTheorem 1. Suppose yn 6= ym, let cn denote the class index of xn (i.e. yn,cn = 1,yn,i = 0,∀i 6= cn). Let α , pn2 + pm2, β , pn,cm + pm,cn and δ , pn − pm22, then:\nPr (sign(〈gn, gm〉) = sign(−〈xn,xm〉)) = Pr(2β + δ > α),\nProof. According to Lemma 1 and yn 6= ym, we have\n〈` ′ n, ` ′ m〉 = 〈pn,pm〉 − pn,cm − pm,cn\nAnd\n〈pn,pm〉 = 1\n2 (pn2 + pm2 − pn − pm2) =\n1 2 (α− δ)\nwhich gives 〈`′n, ` ′ m〉 = 12 (α− δ)− β. When 2β > α− δ, we must have 〈` ′ n, ` ′\nm〉 < 0. According to Lemma 1, we prove this theorem.\nTheorem 2. Suppose yn = ym, when 〈gn, gm〉 6= 0, we have:\nsign(〈gn, gm〉) = sign(〈xn,xm〉),\nProof. Because ∑K k=1 pn,k = 1, pn,k ≥ 0,∀k, and cn = cm = c,\n〈` ′ n, ` ′ m〉 = K∑ k 6=c pn,kpm,k + (pn,c − 1)(pm,c − 1) ≥ 0 (5)\nAccording to Lemma 1, we prove the theorem."
},
{
"heading": "B ALGORITHMS OF ONLINE MEMORY UPDATE",
"text": "We provide the details of online ring buffer update and Balanced Experience Replay (BER) in Algs. 1 to 3. We directly load new data batches into the memory buffer without a separate buffer for the current task. The memory buffer works like a sliding window for each class in the data stream and we draw training batches from the memory buffer instead of directly from the data stream. In this case, one sample may not be seen only once as long as it stays in the memory buffer. This strategy is a more efficient use of the memory when B < nc, where B is the loading batch size of the data stream (i.e. the number of new samples added into the memory buffer at each iteration), we set B to 1 in all experiments (see Appx. I for a discussion of this).\nAlgorithm 1 Ring Buffer Update with Fixed Buffer Size\nInput: Bt  current data batch of the data stream, Ct  the set of classes in Bt,M  memory buffer, C  the set of classes in M, K  memory buffer size. for c in Ct do\nGet Bt,c  samples of class c in Bt, Mc  samples of class c inM, if c in C then Mc =Mc ∪ Bc else Mc = Bc, C = C ∪ {c}\nend if end for R = M+ B −K while R > 0 do c′ = arg maxc Mc remove the first sample inMc′ , R = R−1 end while returnM\nAlgorithm 2 Balanced Experience Replay Input: M  memory buffer, C  the set of classes in M, B  training batch size, Θ  model parameters, LΘ  loss function, Bt  current data batch from the data stream, Ct  the set of classes in Bt, K  memory buffer size.\nM←MemoryUpdate(Bt, Ct,M, C,K) nc, Cs, Cr ← ClassSelection(Ct, C, B) Btrain = ∅ for c in Cs do\nif c in Cr then mc = nc + 1 else mc = nc end if GetMc  samples of class c inM, Bc\nmc∼ Mc C sample mc samples fromMc Btrain = Btrain ∪ Bc\nend for Θ← Optimizer(Btrain,Θ,LΘ)\nAlgorithm 3 Class Selection for BER Input: Ct  the set of classes in current data batch Bt, C  the set of classes inM, B  training batch size, mp  minimum number of positive pairs of each selected class (mp ∈ {0, 1}) . 
Btrain = ∅, nc = bB/Cc, rc = B mod C, if nc > 1 or mp == 0 then Cr\nrc∼ C C sample rc classes from all seen classes without replacement. Cs = C\nelse Cr = ∅, nc = 2, ns = bB/2c − Ct, C we ensure the training batch include samples from the current task. Cs\nns∼ (C − Ct) C sample ns classes from all seen classes except classes in Ct. Cs = Cs ⋃ Ct if B mod 2 > 0 then Cr\n1∼ Cs C sample one class in Cs to have an extra sample. end if\nend if Return: nc, Cs, Cr"
},
{
"heading": "C DEFINITION OF PERFORMANCE MEASURES",
"text": "We use the following measures to evaluate the performance of all methods: Average accuracy, which is evaluated after learning all tasks: āt = 1t ∑t i=1 at,i, where t is the index of the latest task, at,i is the accuracy of task i after learning task t.\nAverage forgetting (Chaudhry et al., 2018), which measures average accuracy drop of all tasks after learning the whole task sequence: f̄t = 1t−1 ∑t−1 i=1 maxj∈{i,...,t−1}(aj,i − at,i).\nAverage intransigence (Chaudhry et al., 2018), which measures the inability of a model learning new tasks: Īt = 1t ∑t i=1 a ∗ i − ai, where ai is the accuracy of task i at time i. We use the best accuracy among all compared models as a∗i instead of the accuracy obtained by an extra model that is solely trained on task i."
},
{
"heading": "D RELATED METHODS FROM DML",
"text": "ρspectrum metric (Roth et al., 2020): ρ = KL(USΦX ), which is proposed to measure the information entropy contained in the representation space. The ρspectrum computes the KLdivergence between a discrete uniform distribution U and the spectrum of data representations SΦX , where SΦX is normalized and sorted singular values of Φ(X ) , Φ denotes the representation extractor (e.g. a neural network) and X is input data samples. Lower values of ρ indicate higher variance of the representations and hence more information entropy retained.\nMultisimilarity(Wang et al., 2019): we adopt the loss function of Multisimilarity as an auxiliary objective in classfication tasks of continual learning, the batch mining process is omitted because we use labels for choosing positive and negative pairs. So the loss function is L̂ = L+ λLmulti, and:\nLmulti = 1\nB B∑ i=1 1 α log[1 + ∑\nj 6=i,yj=yi\nexp (−α(sc(hi, hj)− γ))]\n+ 1\nβ log [1 + ∑ yj 6=yi exp (β(sc(hi, hj)− γ))]\n (6)\nwhere sc(·, ·) is cosine similarity, α, β, γ are hyperparameters. In all of our experiments we set α = 2, β = 40, γ = 0.5 as the same as in Roth et al. (2020).\nRMargin(Roth et al., 2020): we similarly deploy RMargin for continual learning as the above, which uses the Margin loss (Wu et al., 2017) with the ρ regularization (Roth et al., 2020) as introduced in Sec. 2.2. So the loss function is L̂ = L+ λLmargin, and:\nLmargin = B∑ i=1 B∑ j=1 γ + Ij 6=i,yj=yi(d(hi, hj)− β)− Iyj 6=yi(d(hi, hj)− β) (7)\nwhere d(·, ·) is Euclidean distance, β is a trainable variable and γ is a hyperparameter. We follow the setting in Roth et al. (2020): γ = 0.2, the initialization of β is 0.6. We set pρ = 0.2 in ρ regularization."
},
{
"heading": "E ABLATION STUDY ON DRL",
"text": "We verify the effects of LDRL by training a model with/without LDRL on SplitMNIST tasks: Fig. 4a shows that LDRL notably reduces the similarities of representations from different classes while making representations from a same class less similar; Fig. 4b shows the analogous effect on gradients from different classes and a same class. Fig. 4c demonstrates increasing α can effectively decrease ρspectrum to a lowvalue level, where lower values of ρ indicate higher variance of the representations and hence more information entropy retained.\nTab. 7 provides the results of an ablation study on the effects of the two terms in DRL. In general, Lbt gets a better performance in terms of forgetting, Lwi gets a better performance in terms of intransigence and a lower ρspectrum, and both of them show improvements on BER (without any regularization terms). Overall, combining the two terms obtains a better performance on forgetting than standalone Lbt and keeps the advantage on intransigence that brought by Lwi. It indicates preventing overcompact representations while maximizing margins can improve the learned representations that are easier for generalization over previous and new tasks. In addition, we found that using standalone Lbt we can only use a smaller λ otherwise the gradients will explode, and using Lwi together can stablize the gradients. We notice that the lower ρspectrum does not necessarily lead to a higher accuracy as it’s correlation coefficients with accuracy depends on datasets and is usually larger than 1."
},
{
"heading": "F COMPARING DIFFERENT MEMORY SIZES",
"text": "Fig. 5 compares average accuracy of DRL+BER on MNIST tasks with different memory sizes. It appears the fixed memory size is more efficient than the incremental memory size. For example, the\n0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 pairwise similarity of representations\n0\n5\n10\n15\n20\npr ob\nab ilit\ny de\nns e\ndiff class sh diff class sDRh same class sh same class sDRh\n(a) Similarities of representations with and without LDRL\n−1.00 −0.75 −0.50 −0.25 0.00 0.25 0.50 0.75 1.00 pairwise similarity of gradients\n0\n1\n2\n3\n4\n5\n6\n7\n8\npr ob\nab ilit\ny de\nns e\ndiff class sg diff class sDRg same class sg same class sDRg\n(b) Similarities of gradients with and without LDRL\n0 2 4 6 8 10 α\n2\n3\n4\n5\n6\n7\nρ\n(c) Relation between α and ρspectrum.\nFigure 4: Effects of LDRL on reducing diveristy of gradients and ρspectrum. (a) and (b) display distributions of similarities of representations and gradients. sDRh and sh denote similarities of representations with and without LDRL, respectively, sDRg and sg denote similarities of gradients with and withoutLDRL, respectively. (c) demonstrates increasing α inLDRL can reduce ρ effectively.\nfixed memory size (M = 300) getting very similar average accuracy with memory M = 50 per class in Disjoint MNIST while it takes less cost of the memory after task 3. Meanwhile, the fixed memory size (M = 300) gets much better performance than M = 50 per task in most tasks of Permuted MNIST and it takes less cost of the memory after task 6. Since the setting of fixed memory size takes larger memory buffer in early tasks, the results indicate better generalization of early tasks can benefit later tasks, especially for more homogeneous tasks such as Permuted MNIST. The results also align with findings about Reservoir sampling (which also has fixed buffer size) in Chaudhry et al. (2019b) and we also believe a hybrid memory strategy can obtain better performance as suggested in Chaudhry et al. (2019b)."
},
{
"heading": "G COMPARING DIFFERENT REPLAY STRATEGY",
"text": "We compare DRL with different memory replay strategies in Tab. 8 to show DRL has general improvement based on the applied replay strategy."
},
{
"heading": "H COMPARING TRAINING TIME",
"text": "Tab. 9 compares the training time of MNIST tasks. All representationbased methods are much faster than gradientbased methods and close to the replaybased methods."
},
{
"heading": "I HYPERPARAMETERS IN EXPERIMENTS",
"text": "To make a fair comparison of all methods, we use following settings: i) The configurations of GSSgreedy are as suggested in Aljundi et al. (2019), with batch size set to 10 and each batch receives 10 iterations. ii) For the other methods, we use the ring buffer memory as described in Alg. 1, the loading batch size is set to 1, following with one iteration, the training batch size is provided in Tab. 10. More hyperparameters are given in Tab. 10 as well.\nIn the setting of limited training data in online continual learning, we either use a small batch size or iterate on one batch several times to obtain necessary steps for gradient optimization. We chose a small batch size with one iteration instead of larger batch size with multiple iterations because by our memory update strategy (Alg. 1) it achieves similar performance with fewer hyperparameters. Since GSSgreedy has a different strategy for updating memories, we leave it at its default settings.\nRegarding the two terms in DRL, a larger weight on Lwi is for less compact representations within classes, but a too dispersed representation space may include too much noise. For datasets that present more difficulty in learning compact representations, we would prefer a smaller weight on Lwi, we therefore set smaller α for CIFAR datasets in our experiments. A larger weight on Lbt is more resistant to forgetting but may be less capable of transferring to a new task, for datasets that are less compatible between tasks a smaller weight on Lbt would be preferred, as we set the largest λ on Permuted MNIST and the smallest λ on CIFAR100 in our experiments."
}
]
 2020
 
SP:09f2fe6a482bbd6f9bd2c62aa841f995171ba939
 [
"This paper proposes a new framework that computes the taskspecific representations to modulate the model parameters during the multitask learning (MTL). This framework uses a single model with shared representations for learning multiple tasks together. Also, explicit task information may not be always available, in such cases the proposed framework is useful. The proposed framework is evaluated on various datasets spanning multiple modalities, where the MTL model even achieves stateoftheart results on some datasets. "
]
 Existing Multi-Task Learning (MTL) strategies like joint or meta-learning focus more on shared learning and have little to no scope for task-specific learning. This creates the need for a distinct shared pre-training phase and a task-specific fine-tuning phase. The fine-tuning phase creates separate models for each task, where improving the performance of a particular task necessitates forgetting some of the knowledge garnered in other tasks. Humans, on the other hand, perform task-specific learning in synergy with general domain-based learning. Inspired by these learning patterns in humans, we suggest a simple yet generic task-aware framework to incorporate into existing MTL strategies. The proposed framework computes task-specific representations to modulate the model parameters during MTL. Hence, it performs both shared and task-specific learning in a single phase, resulting in a single model for all the tasks. The single model itself achieves significant performance gains over the existing MTL strategies. For example, we train a model on Speech Translation (ST), Automatic Speech Recognition (ASR), and Machine Translation (MT) tasks using the proposed task-aware multi-task learning approach. This single model achieves a BLEU score of 28.64 on ST MuST-C English-German, a WER of 11.61 on ASR TED-LIUM v3, and a BLEU score of 23.35 on MT WMT14 English-German. This sets a new state-of-the-art (SOTA) on the ST task while outperforming the existing end-to-end ASR systems, with competitive performance on the MT task.
 []
 [
{
"authors": [
"Rosana Ardila",
"Megan Branson",
"Kelly Davis",
"Michael Henretty",
"Michael Kohler",
"Josh Meyer",
"Reuben Morais",
"Lindsay Saunders",
"Francis M. Tyers",
"Gregor Weber"
],
"title": "Common voice: A massivelymultilingual speech",
"venue": null,
"year": 2020
},
{
"authors": [
"Craig Atkinson",
"Brendan McCane",
"Lech Szymanski",
"Anthony V. Robins"
],
"title": "Pseudorecursal: Solving the catastrophic forgetting problem in deep neural networks",
"venue": "CoRR, abs/1802.03875,",
"year": 2018
},
{
"authors": [
"Dzmitry Bahdanau",
"Kyunghyun Cho",
"Yoshua Bengio"
],
"title": "Neural machine translation by jointly learning to align and translate",
"venue": "In Computer Science Mathematics CoRR,",
"year": 2015
},
{
"authors": [
"Rich Caruana"
],
"title": "Multitask learning",
"venue": "Machine learning,",
"year": 1997
},
{
"authors": [
"Brian Cheung",
"Alexander Terekhov",
"Yubei Chen",
"Pulkit Agrawal",
"Bruno Olshausen"
],
"title": "Superposition of many models into one",
"venue": "Advances in Neural Information Processing Systems,",
"year": 2019
},
{
"authors": [
"Ronan Collobert",
"Jason Weston"
],
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning",
"venue": "In Proceedings of the 25th International Conference on Machine Learning,",
"year": 2008
},
{
"authors": [
"L. Deng",
"G. Hinton",
"B. Kingsbury"
],
"title": "New types of deep neural network learning for speech recognition and related applications: an overview",
"venue": "IEEE International Conference on Acoustics, Speech and Signal Processing,",
"year": 2013
},
{
"authors": [
"Jacob Devlin",
"MingWei Chang",
"Kenton Lee",
"Kristina Toutanova. Bert"
],
"title": "Pretraining of deep bidirectional transformers for language understanding",
"venue": "arXiv preprint arXiv:1810.04805,",
"year": 2018
},
{
"authors": [
"Mattia A. Di Gangi",
"Roldano Cattoni",
"Luisa Bentivogli",
"Matteo Negri",
"Marco Turchi"
],
"title": "MuSTC: a Multilingual Speech Translation Corpus",
"venue": null,
"year": 2019
},
{
"authors": [
"Sergey Edunov",
"Myle Ott",
"Michael Auli",
"David Grangier"
],
"title": "Understanding backtranslation at scale",
"venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,",
"year": 2018
},
{
"authors": [
"Chelsea Finn",
"Pieter Abbeel",
"Sergey Levine"
],
"title": "Modelagnostic metalearning for fast adaptation of deep networks",
"venue": "In Proceedings of the 34th International Conference on Machine LearningVolume",
"year": 2017
},
{
"authors": [
"Chelsea Finn",
"Pieter Abbeel",
"Sergey Levine"
],
"title": "Modelagnostic metalearning for fast adaptation of deep networks",
"venue": "In Proceedings of the 34th International Conference on Machine LearningVolume",
"year": 2017
},
{
"authors": [
"R. Girshick"
],
"title": "Fast rcnn",
"venue": "IEEE International Conference on Computer Vision (ICCV),",
"year": 2015
},
{
"authors": [
"Jiatao Gu",
"Hany Hassan",
"Jacob Devlin",
"Victor O.K. Li"
],
"title": "Universal neural machine translation for extremely low resource languages. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)",
"venue": null,
"year": 2018
},
{
"authors": [
"Jiatao Gu",
"Yong Wang",
"Yun Chen",
"Victor O.K. Li",
"Kyunghyun Cho"
],
"title": "Metalearning for lowresource neural machine translation",
"venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,",
"year": 2018
},
{
"authors": [
"Kazuma Hashimoto",
"Caiming Xiong",
"Yoshimasa Tsuruoka",
"Richard Socher"
],
"title": "A joint manytask model: Growing a neural network for multiple NLP",
"venue": "tasks. CoRR,",
"year": 2016
},
{
"authors": [
"Tianxing He",
"Jun Liu",
"Kyunghyun Cho",
"Myle Ott",
"Bing Liu",
"James Glass",
"Fuchun Peng"
],
"title": "Analyzing the forgetting problem in the pretrainfinetuning of dialogue response",
"venue": null,
"year": 2020
},
{
"authors": [
"François Hernandez",
"Vincent Nguyen",
"Sahar Ghannay",
"Natalia Tomashenko",
"Yannick Estève"
],
"title": "Tedlium 3: Twice as much data and corpus repartition for experiments on speaker adaptation",
"venue": "Lecture Notes in Computer Science,",
"year": 2018
},
{
"authors": [
"S. Indurthi",
"H. Han",
"N.K. Lakumarapu",
"B. Lee",
"I. Chung",
"S. Kim",
"C. Kim"
],
"title": "Endend speechtotext translation with modality agnostic metalearning",
"venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),",
"year": 2020
},
{
"authors": [
"Sathish Reddy Indurthi",
"Insoo Chung",
"Sangha Kim"
],
"title": "Look harder: A neural machine translation model with hard attention",
"venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,",
"year": 2019
},
{
"authors": [
"Javier IranzoSánchez",
"Joan Albert SilvestreCerdà",
"Javier Jorge",
"Nahuel Roselló",
"Adrià Giménez",
"Albert Sanchis",
"Jorge Civera",
"Alfons Juan"
],
"title": "Europarlst: A multilingual corpus for speech translation of parliamentary debates",
"venue": null,
"year": 1911
},
{
"authors": [
"Nikhil Kumar Lakumarapu",
"Beomseok Lee",
"Sathish Reddy Indurthi",
"Hou Jeung Han",
"Mohd Abbas Zaidi",
"Sangha Kim"
],
"title": "Endtoend offline speech translation system for IWSLT 2020 using modality agnostic metalearning",
"venue": "In Proceedings of the 17th International Conference on Spoken Language Translation,",
"year": 2020
},
{
"authors": [
"Z. Li",
"D. Hoiem"
],
"title": "Learning without forgetting",
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,",
"year": 2018
},
{
"authors": [
"Zhizhong Li",
"Derek Hoiem"
],
"title": "Learning without forgetting",
"venue": "IEEE Trans. Pattern Anal. Mach. Intell.,",
"year": 2018
},
{
"authors": [
"Pierre Lison",
"Jörg Tiedemann",
"Milen Kouylekov"
],
"title": "Open subtitles 2018: Statistical rescoring of sentence alignments in large, noisy parallel corpora. In LREC 2018",
"venue": "Eleventh International Conference on Language Resources and Evaluation. European Language Resources Association (ELRA),",
"year": 2019
},
{
"authors": [
"Xiaodong Liu",
"Jianfeng Gao",
"Xiaodong He",
"Li Deng",
"Kevin Duh",
"Yeyi Wang"
],
"title": "Representation learning using multitask deep neural networks for semantic classification and information retrieval",
"venue": "In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,",
"year": 2015
},
{
"authors": [
"Xiaodong Liu",
"Kevin Duh",
"Liyuan Liu",
"Jianfeng Gao"
],
"title": "Very deep transformers for neural machine translation",
"venue": "arXiv preprint arXiv:2008.07772,",
"year": 2020
},
{
"authors": [
"Yuchen Liu",
"Hao Xiong",
"Zhongjun He",
"Jiajun Zhang",
"Hua Wu",
"Haifeng Wang",
"Chengqing Zong"
],
"title": "Endtoend speech translation with knowledge distillation",
"venue": "CoRR, abs/1904.08075,",
"year": 2019
},
{
"authors": [
"Thang Luong",
"Hieu Pham",
"Christopher D. Manning"
],
"title": "Effective approaches to attentionbased neural machine translation",
"venue": "In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing,",
"year": 2015
},
{
"authors": [
"V. Panayotov",
"G. Chen",
"D. Povey",
"S. Khudanpur"
],
"title": "Librispeech: An asr corpus based on public domain audio books",
"venue": "In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),",
"year": 2015
},
{
"authors": [
"Ethan Perez",
"Florian Strub",
"Harm de Vries",
"Vincent Dumoulin",
"Aaron C. Courville"
],
"title": "Film: Visual reasoning with a general conditioning layer",
"venue": "In AAAI,",
"year": 2018
},
{
"authors": [
"NgocQuan Pham",
"ThaiSon Nguyen",
"Jan Niehues",
"Markus Müller",
"Alex Waibel"
],
"title": "Very deep selfattention networks for endtoend speech recognition",
"venue": "CoRR, abs/1904.13377,",
"year": 2019
},
{
"authors": [
"Juan Pino",
"Qiantong Xu",
"Xutai Ma",
"Mohammad Javad Dousti",
"Yun Tang"
],
"title": "Selftraining for endtoend speech translation",
"venue": "arXiv preprint arXiv:2006.02490,",
"year": 2020
},
{
"authors": [
"Matt Post"
],
"title": "A call for clarity in reporting BLEU scores",
"venue": "In Proceedings of the Third Conference on Machine Translation: Research Papers,",
"year": 2018
},
{
"authors": [
"Tomasz Potapczyk",
"Pawel Przybysz",
"Marcin Chochowski",
"Artur Szumaczuk"
],
"title": "Samsung’s system for the iwslt 2019 endtoend speech translation task",
"venue": "In 16th International Workshop on Spoken Language Translation (IWSLT). Zenodo,",
"year": 2019
},
{
"authors": [
"Colin Raffel",
"Noam Shazeer",
"Adam Roberts",
"Katherine Lee",
"Sharan Narang",
"Michael Matena",
"Yanqi Zhou",
"Wei Li",
"Peter J Liu"
],
"title": "Exploring the limits of transfer learning with a unified texttotext transformer",
"venue": "arXiv preprint arXiv:1910.10683,",
"year": 2019
},
{
"authors": [
"Bharath Ramsundar",
"Steven Kearnes",
"Patrick Riley",
"Dale Webster",
"David Konerding",
"Vijay Pande"
],
"title": "Massively multitask networks for drug",
"venue": null,
"year": 2015
},
{
"authors": [
"Rico Sennrich",
"Barry Haddow",
"Alexandra Birch"
],
"title": "Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
"venue": null,
"year": 2016
},
{
"authors": [
"Gjorgji Strezoski",
"Nanne van Noord",
"Marcel Worring"
],
"title": "Many task learning with task",
"venue": "routing. CoRR,",
"year": 2019
},
{
"authors": [
"Shubham Toshniwal",
"Tara N Sainath",
"Ron J Weiss",
"Bo Li",
"Pedro Moreno",
"Eugene Weinstein",
"Kanishka Rao"
],
"title": "Multilingual speech recognition with a single endtoend model",
"venue": "In ICASSP,",
"year": 2018
},
{
"authors": [
"Ashish Vaswani",
"Noam Shazeer",
"Niki Parmar",
"Jakob Uszkoreit",
"Llion Jones",
"Aidan N Gomez",
"Łukasz Kaiser",
"Illia Polosukhin"
],
"title": "Attention is all you need",
"venue": "In Advances in neural information processing systems,",
"year": 2017
},
{
"authors": [
"Lijun Wu",
"Yiren Wang",
"Yingce Xia",
"Fei Tian",
"Fei Gao",
"Tao Qin",
"Jianhuang Lai",
"TieYan Liu"
],
"title": "Depth growing for neural machine translation",
"venue": null,
"year": 1907
},
{
"authors": [
"Zhanpeng Zhang",
"Ping Luo",
"Chen Change Loy",
"Xiaoou Tang"
],
"title": "Facial landmark detection by deep multitask learning",
"venue": "Computer Vision – ECCV",
"year": 2014
}
]
 [
{
"heading": "1 INTRODUCTION",
"text": "The process of MultiTask Learning (MTL) on a set of related tasks is inspired by the patterns displayed by human learning. It involves a pretraining phase over all the tasks, followed by a finetuning phase. During pretraining, the model tries to grasp the shared knowledge of all the tasks involved, while in the finetuning phase, taskspecific learning is performed to improve the performance. However, as a result of the finetuning phase, the model forgets the information about the other tasks that it learnt during pretraining. Humans, on the other hand, are less susceptible to forgetfulness and retain existing knowledge/skills while mastering a new task. For example, a polyglot who masters a new language learns to translate from this language without losing the ability to translate other languages. Moreover, the lack of taskbased flexibility and having different finetuning/pretraining phases cause gaps in the learning process due to the following reasons:\nRole Mismatch: Consider the MTL system being trained to perform the Speech Translation(ST), Automatic Speech Recognition(ASR) and Machine Translation(MT) tasks. The Encoder block has a very different role in the standalone ASR, MT and ST models and hence we cannot expect a single encoder to perform well on all the tasks without any cues to identify/use task information. Moreover, there is a discrepancy between pretraining and finetuning hampering the MTL objective.\nTask Awareness: At each step in the MTL, the model tries to optimize over the task at hand. For tasks like ST and ASR with the same source language, it is impossible for the model to identify the task and alter its parameters accordingly, hence necessitating a finetuning phase. A few such examples have been provided in Table 1. 
Humans, on the other hand, grasp the task they have to perform by means of context or explicit cues.\nAlthough MTL strategies help the fine-tuned models perform better than models directly trained on those tasks, their applicability is limited to finding a good initialization point for the fine-tuning phase. Moreover, having a separate model for each task increases the memory requirements, which is detrimental in low-resource settings.\nIn order to achieve the goal of jointly learning all the tasks, similar to humans, we need to perform shared learning in synergy with task-specific learning. Previous approaches such as Raffel et al. (2019) trained a joint model for a set of related text-to-text tasks by providing the task information along with the inputs during the joint learning phase. However, providing explicit task information is not always desirable; consider, e.g., the automatic multilingual speech translation task. In order to ensure a seamless user experience, the model is expected to extract the task information implicitly.\nThus, a holistic joint learning strategy requires a generic framework which learns task-specific information without any explicit supervision.\nIn this work, we propose a generic framework which can be easily integrated into MTL strategies and which can extract task-based characteristics. The proposed approach aligns existing MTL approaches with human learning processes by incorporating task information into the learning process and removing the issues related to forgetfulness. We design a modulation network for learning the task characteristics and modulating the parameters of the model during MTL. As discussed above, the task information may or may not be explicitly available during training. Hence, we propose two designs of the task modulation network to learn the task characteristics; one uses explicit task identities while the other uses examples from the task as input. 
The model, coupled with the modulation network, jointly learns all the tasks and, at the same time, performs task-specific learning. The proposed approach tackles issues related to forgetfulness by keeping a single model for all the tasks, and hence avoids the expensive fine-tuning phase. Having a single model for all the tasks also reduces memory constraints, improving suitability for low-resource devices.\nTo evaluate the proposed framework, we conduct two sets of experiments. First, we include the task information during MTL on text-to-text tasks to show the effect of task information. Second, we train a model on highly confounding tasks with different modalities and end goals. Our proposed framework allows the model to learn the task characteristics without any explicit supervision, and hence to train a single model which performs well on all the tasks. The main contributions of this work are as follows:\n• We propose an approach to tackle the issue of forgetfulness which occurs during the fine-tuning phase of existing MTL strategies.\n• Our model, without any fine-tuning, achieves superior performance on all the tasks, which alleviates the need to keep separate task-specific models.\n• Our proposed framework is generic enough to be used with any MTL strategy involving tasks with multiple modalities."
},
{
"heading": "2 TASKAWARE MULTITASK LEARNING",
"text": "An overview of our proposed approach is shown in Figure 1."
},
{
"heading": "2.1 BASE MODEL",
"text": "In general, the sequencetosequence architecture consists of two components: (1) an encoder which computes a set of representationsX = {x1, · · · ,xm} ∈ Rm×d corresponding to x, and a decoder coupled with attention mechanism (Bahdanau et al., 2015) dynamically reads encoder’s output and predicts target language sequence Y = {y1, · · · ,yn} ∈ Rn×d. It is trained on a dataset D to maximize the p (Y X; θ), where θ are parameters of the model. We use the Transformer Vaswani et al. (2017) as our base model. Based on the task modalities, we choose the preprocessing layer in the Transformer, i.e., speech or the text (textembedding) preprocessing layer. The speech preprocessing layer consists of a stack of k CNN layers with stride 2 for both time and frequency dimensions. This layer compresses the speech sequence and produces the output sequence such that input sequences corresponding to all the tasks have similar dimensions, d. The overview of the base sequencetosequence model is shown in the rightmost part of Figure 1."
},
{
"heading": "2.2 TASK MODULATION NETWORK",
"text": "The task modulation network performs two operations. In the first step, it computes the task characteristics (te) using the task characteristics layer. It then modulates the model parameters θ using te in the second step."
},
{
"heading": "2.2.1 TASK CHARACTERISTICS NETWORK:",
"text": "We propose two types of Task Characteristics Networks(TCN) to learn the task characteristics, where one uses explicit task identities while the other uses sourcetarget sequences as input.\nExplicit Task Information: In this approach, the tasks involved are represented using different task identities and fed as input to this TCN as one hot vectors. This network consists of a feedforward layer which produces the task embedding used for modulating the model parameters.\nte = FFN(e), (1)\nwhere e ∈ Rs is a onehot encoding of s tasks used during joint learning. Implicit Task Information: The Implicit TCN computes the task embeddings using example sequences from the tasks without any external supervision. It consists of four sublayers: (1) Sequence Representation Layer, (2) Bidirectional Attention Layer, (3) Sequence Summary Layer, and (4) Task Embedding Layer.\nThe sequence representation sublayer consists of unidirectional Transformer Encoder (TE) blocks Vaswani et al. (2017). It takes the source and target sequences from the tasks as input and produces\nselfattended source and target sequences.\nXsa = TE(X), Y sa = TE(Y ), (2)\nwhereXsa ∈ RM×d, Y sa ∈ RN×d. This sublayer computes the contextual representation of the sequences.\nThe Bidirectional Attention (BiA) sublayer takes the selfattended source and target sequences from the previous layer as input and computes the relation between them using DotProduct Attention Luong et al. (2015). As a result, we get target aware source (Xat ∈ RM×d) and source aware target (Y asRN×d) representations as outputs.\nXat = BiA(Xsa,Y sa), Y as = BiA(Y sa,Xsa). (3)\nThe sequence summary sublayer is similar to the sequence representation sub layer and summarizes the sequences. The sequence summaries are given by:\nXs = TEu(X at), Y s = TEu(Y as), (4)\nwhereXs ∈ RM×d, Y s ∈ RN×d. The Equation 4 summarizes the sequencesXat and Y as which contain the contextual and attention information. 
We take the last tokens x^s ∈ R^d and y^s ∈ R^d from both summaries, since the last token can see the whole sequence and acts as a summary of it. The task embedding layer computes te by taking the outputs of the sequence summary sublayer and applying a feed-forward network:\nte = FFN([x^s : y^s]). (5)"
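The shape flow of Equations 2–5 can be sketched in NumPy. This is a simplified stand-in, not the authors' implementation: the Transformer-encoder sublayers are omitted (random matrices stand in for their outputs), attention is single-head, and the FFN is a single linear map:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def bidirectional_attention(X_sa, Y_sa):
    # Eq. 3: dot-product attention in both directions.
    d = X_sa.shape[-1]
    X_at = softmax(X_sa @ Y_sa.T / np.sqrt(d)) @ Y_sa  # target-aware source, (M, d)
    Y_as = softmax(Y_sa @ X_sa.T / np.sqrt(d)) @ X_sa  # source-aware target, (N, d)
    return X_at, Y_as

def task_embedding(X_s, Y_s, W, b):
    # Eq. 5: concatenate the last-token summaries [x^s : y^s] and
    # apply a (here single-layer) feed-forward network.
    summary = np.concatenate([X_s[-1], Y_s[-1]])       # shape (2d,)
    return W @ summary + b                             # te, shape (2d,)

M, N, d = 4, 6, 8
rng = np.random.default_rng(0)
X_sa, Y_sa = rng.normal(size=(M, d)), rng.normal(size=(N, d))  # stand-ins for Eq. 2
X_at, Y_as = bidirectional_attention(X_sa, Y_sa)
te = task_embedding(X_at, Y_as, rng.normal(size=(2 * d, 2 * d)), np.zeros(2 * d))
```

The resulting te ∈ R^{2d}, which is exactly the size needed to split into γ and β for modulation (Section 2.2.2).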
},
{
"heading": "2.2.2 MODULATING MODEL PARAMETERS",
"text": "We modulate the parameters (θ) of the network (Section 2.1) to account for the taskspecific variation during MTL over a set of tasks. We achieve this by scaling (γ) and shifting (β) the outputs of each layer (e.g., transformer block) including any preprocessing layers in the model adopted based on the Featurewise Linear Modulation (FiLM; Perez et al. (2018)). The γ and β parameters are obtained from the task embedding te either by using equation 1 or 5.\nγ = te[: d], β = te[d :], (6)\nwhere te ∈ R2d, and d is the hidden dimension of the model. Once we have γ and β, we apply the featurewise linear modulation (Perez et al., 2018) to compute the modulated output (Ol) for each block of the model.\nOl = γ ∗ fl(vl; θl) + β, l = 1, · · · , L, (7) where L is the total number of blocks in the model and fl represents the lth block of the model with parameters θl ∈ θ and inputs vl."
},
{
"heading": "2.3 TRAINING",
"text": "MTL has been successfully applied across different applications of machine learning such as natural language processing (Hashimoto et al., 2016; Collobert & Weston, 2008), speech recognition (Liu et al., 2019; Deng et al., 2013), computer vision (Zhang et al., 2014; Liu et al., 2015; Girshick, 2015), and drug discovery (Ramsundar et al., 2015). It comes in many forms: joint learning, learning to learn, and learning with auxiliary tasks. We consider two MTL strategies: (1) joint learning and (2) learning to learn to train on set of S tasks, T = {τ1, · · · , τS} with corresponding datasets D = {D1, · · · , DS}. As our first training strategy, we use Joint Learning (JL) (Caruana, 1997), which is the most commonly used training strategy for MTL. In JL, the model parameters, including the output layer, are shared across all the tasks involved in the training. For the second training strategy under the learningtolearn approach, we use a variant of metalearning, Modality Agnostic Meta Learning (MAML) (Finn et al., 2017a). Even though MAML is mostly used in fewshot learning settings, we use it since it\nallows for taskspecific learning during the metatrain step and it has also been shown to provide improvements in the field of speech translation(Indurthi et al., 2020).\nWe resolve the sourcetarget vocabulary mismatch across different tasks in MTL by using a vocabulary of subwords (Sennrich et al., 2016) computed from all the tasks. We sample a batch of examples from Ds and use this as input to the TCN and the Transformer model. To ensure that each training example uses the task embedding computed using another example, we randomly shuffle this batch while using them as input to the TCN. This random shuffling improves the generalization performance by forcing the network to learn taskspecific characteristics (te) in Equation 1 or 5. 
We compute the task embedding in the meta-train step as well; however, the parameters of the TCN are updated only during the meta-test step. At inference time, we use task embeddings precomputed from a batch of examples randomly sampled from the training set."
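The batch-shuffling trick described above can be sketched as follows. This is a simplified stand-in: the authors shuffle the batch fed to the TCN, whereas here we equivalently permute the resulting per-example embeddings, which produces the same example-to-embedding pairing:

```python
import numpy as np

def cross_example_task_embeddings(task_embeddings, rng):
    """Pair each training example with a task embedding computed from a
    (usually different) example of the same batch.

    task_embeddings -- array of shape (batch, 2d), one embedding per example
    rng             -- numpy Generator, e.g. np.random.default_rng(seed)
    """
    perm = rng.permutation(len(task_embeddings))
    return task_embeddings[perm]

rng = np.random.default_rng(0)
te_batch = np.arange(8.0).reshape(8, 1)          # toy per-example embeddings
shuffled = cross_example_task_embeddings(te_batch, rng)
```

Since all examples in the batch come from the same task, the permuted embedding still carries the right task identity, but the network cannot rely on example-specific details when learning te.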
},
{
"heading": "3 EXPERIMENTS",
"text": ""
},
{
"heading": "3.1 TASKS AND DATASETS",
"text": "We conduct two sets of experiments, one with the tasks having the same input modality, i.e., text and another over tasks having different input modalities, i.e., speech and text. The main motivation behind the textbased experiments is to establish the importance of providing task information in MTL. Our main experiments, containing different input modalities involve highly confusing tasks. These experiments help us demonstrate the effectiveness of our approach in a generic setup. We incorporate the proposed task modulation framework into joint and metalearning strategies and analyze its effects."
},
{
"heading": "3.1.1 SINGLE MODALITY EXPERIMENTS",
"text": "We perform the small scale texttotext machine translation task over three language pairs EnglishGerman/Romanian/Turkish (EnDe/Ro/Tr). We keep English as the source language, which makes it crucial to use task information and produce different outputs from the same input. Since it is easier to provide task identity through onehot vectors in text, we provide the task information by simply prepending the task identity to the source sequence of each task, e.g., ”translate from English to German”, ”translate from English to Turkish” similar to Raffel et al. (2019). We also train models using our proposed framework to learn the task information and shared knowledge jointly.\nFor EnDe, we use 1.9M training examples from the Europarl v7 dataset. Europarl dev2006 and News Commentary ncdev2007 are used as the dev and Europarl devtest2006, Europarl test2006 and News Commentary ncdevtest2007 as the test sets. For EnTr we train using 200k training examples from the setimes2 dataset. We use newsdev2016 as the dev and newstest2017 as the test set. For EnRo, we use 600k training examples from Europarl v8 and setimes2 datasets. We use newsdev2016 as dev and newstest2016 as the test set."
},
{
"heading": "3.1.2 MULTIPLE MODALITY EXPERIMENTS",
"text": "To alleviate the data scarcity issue in Speech Translation (ST), several MTL strategies have been proposed to jointly train the ST task with Automatic Speech Recognition (ASR) and Machine Translation (MT) tasks. These MTL approaches lead to significant performance gains on both ST and ASR tasks after the finetuning phase. We evaluate our proposed framework based on this multimodal MTL setting since passing the task information explicitly via prepending labels(like the texttotext case) in the source sequence is not possible. We use the following datasets for ST EnglishGerman, ASR English, MT EnglishGerman tasks:\nMT EnDe: We use the Open Subtitles (Lison et al., 2019) and WMT 19 corpora. WMT 19 consists of Common Crawl, Europarl v9, and News Commentary v14 datasets(22M training examples).\nASR English: We used five different datasets namely LibriSpeech (Panayotov et al., 2015), MuSTC (Di Gangi et al., 2019), TEDLIUM (Hernandez et al., 2018), Common Voice (Ardila et al., 2020) and filtered IWSLT 19 (IWS, 2019) to train the English ASR task.\nST Task: We use the Europarl ST (IranzoSánchez et al., 2019), IWSLT 2019 (IWS, 2019) and MuSTC (Di Gangi et al., 2019) datasets. Since ST task has lesser training examples, we use data augmentation techniques (Lakumarapu et al., 2020) to increase the number of training examples.\nPlease refer to the appendix for more details about the data statistics and data augmentation techniques used. All the models reported in this work use the same data settings for training and evaluation."
},
{
"heading": "3.2 IMPLEMENTATION DETAILS AND METRICS",
"text": "We implemented all the models using Tensorflow 2.2 framework. For all our experiments, we use the Transformer(Vaswani et al., 2017) as our base model. The hyperparameter settings such as learning rate, scheduler, optimization algorithm, and dropout have been kept similar to the Transformer, other than the ones explicitly stated to be different. The ASR performance is measured using Word Error Rate (WER) while ST and MT performances are calculated using the detokenized cased BLEU score (Post, 2018). We generate wordpiece based universal vocabulary (Gu et al., 2018a) of size 32k using source and target text sequences of all the tasks. For the task aware MTL strategies, we choose a single model to report the results rather than finding the best model for each task separately.\nWe train the texttotext translation models using 6 Encoder and Decoder layers with a batch size of 2048 text tokens. The training is performed using NVIDIA P40 GPU for 400k steps.\nIn multimodality experiments, the speech signals are represented using 80dimensional logMel features and use 3 CNN layers in the preprocessing layer described in Section 2.1. We use 12 Encoder and Decoder layers and train for 600k steps using 8 NVIDIA V100 GPUs. For the systems without TCN, we perform finetuning for 10k steps on each task."
},
{
"heading": "3.3 RESULTS",
"text": ""
},
{
"heading": "3.3.1 SINGLE MODALITY EXPERIMENTS",
"text": "The results for the texttotext translation models trained with different MTL strategies have been provided in Table 2. The MTL models with prepended task label (Raffel et al., 2019) are referred to as OHV (One Hot Vector). Unlike T5, we don’t initialize the models with the text embeddings from large pretrained language model (Devlin et al., 2018). Instead, we focus on establishing the importance of task information during MTL and having a single model for all the tasks. As we can see from the results, providing the task information via text labels or implicitly using the proposed task aware MTL leads to significant performance improvements compared to the MTL without the task information. The models trained using OHV have better performance than those trained using implicit TCN. However, providing OHV via text labels is not always possible for tasks involving nontext modalities such as speech and images."
},
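The OHV setup above conditions the model by prepending a task label to the source sequence. A minimal sketch of that mechanism follows; the label strings and task names here are illustrative assumptions, not the paper's actual tokens:

```python
# Hypothetical task-label tokens in the shared 32k vocabulary (illustrative only).
TASK_LABELS = {"mt": "<2mt>", "asr": "<2asr>", "st": "<2st>"}

def add_task_label(source_tokens, task):
    """Prepend an explicit task label so the model knows which task to perform."""
    return [TASK_LABELS[task]] + list(source_tokens)
```

For speech or image inputs there is no token stream to prepend to, which is exactly the limitation the paragraph points out.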
{
"heading": "3.3.2 MULTI MODALITY EXPERIMENTS",
"text": "We evaluate the proposed two TCNs and compare them with the vanilla MTL strategies. The performance of all the models is reported in Table 3. We also extended the T5 (Raffel et al., 2019) approach to the multi modality experiments and compare it with our approach.\nEffect of Task Information: The models trained using task aware MTL achieve significant performance gains over the models trained using vanilla MTL approach. Our single model achieves superior performance compared to the vanilla MTL models even after the finetuning. This shows that not only is the task information essential to identify the task, but also helps to extract the shared knowledge better. Our JL and MAML models trained with task aware MTL achieve improvements of (+2.65, +2.52) for ST, (1.34, 1.18) for ASR, and (+0.72, +1.26) for the MT task. MAML has some scope for taskspecific learning during its meta train step, which explains why the improvements for MAML are slightly lesser than JL for ST and ASR tasks.\nWe also report results using Direct Learning (DL) approach, where separate models are trained for each task, to compare with MTL models. All the MTL models outperform the DL models on ST and ASR tasks have comparable performance on MT task.\nExplicit v/s Implicit TCN: Our proposed implicit TCN learns the task characteristics directly from the examples of each task and achieves a performance comparable to the models trained using explicit TCN. This indicates that it is better to learn the task information implicitly, specifically for tasks having overlapping characteristics. Figure 2 contains the tSNE plots for task embeddings obtained from the implicit TCN for single and multimodality experiments. 
We can observe that the implicit TCN is also able to separate all three tasks effectively without any external supervision.\nSingle model for all tasks: We select one single model for reporting the results for our approach, since having a single model for multiple tasks is favourable in low-resource settings. However, we also report the best models corresponding to each task (rows 8 and 11 of Table 3). We observe that choosing a single model over task-specific models did not result in any significant performance loss.\nFeature-wise v/s Input-based modulation: We also implemented the input-based conditioning (Toshniwal et al., 2018; Raffel et al., 2019), where we prepend the TCN output, i.e., the task information, to the source and target sequences. This approach provides performance comparable to ours on the ASR task. However, the ST performance is erratic and the output is mixed between the ST and ASR tasks. This shows that feature-wise modulation is a more efficient way to carry out task-based conditioning for highly confusable tasks like ST and ASR.\nNumber of parameters added: The explicit TCN, which is a dense layer, adds roughly 1,500 new parameters. The implicit TCN adds roughly 1 million new parameters. However, simply increasing the number of parameters is not sufficient to improve the performance. For example, we trained several models by increasing the number of encoder and decoder layers up to 16. However, these models gave inferior performance compared to the reported models with 12 encoder and decoder layers.\nScaling with a large number of tasks: The t-SNE plots in Figure 2b are drawn using the three test datasets. However, we used multiple datasets for each of the ASR (LibriSpeech, Common Voice, TED-LIUM, MuST-C ASR), ST (MuST-C, IWSLT20, Europarl), and MT (WMT19, OpenSubtitles) tasks in the multi-modality experiments. 
We analyze whether or not our proposed approach is able to separate the data coming from these different distributions. Compared to data coming from different tasks, separating data coming from the same task (generated from different distributions) is more difficult. Earlier, in Figure 2b, we observed that the output is clustered based on the tasks. Figure 2c shows that within these task-based clusters, there are sub-clusters based on the source dataset. Hence, the model is able to identify each sub-task based on the source dataset. The model also gives decent performance on all of them. For example, the single model achieves a WER of 7.5 on the LibriSpeech test-clean, 10.35 on MuST-C, 11.65 on TED-LIUM v3, and 20.36 on the Common Voice test set. For the ST task, the same model gives a BLEU score of 28.64 on the MuST-C test set, 27.61 on the IWSLT tst2010, and 27.57 on the Europarl test set. This shows that our proposed approach scales well with the total number of tasks.\nComparison with existing works: The design of our system, i.e., the parameters and the related tasks, was fixed keeping the ST task in mind. We compare the results of our best systems (after checkpoint averaging) with recent works in Table 4. We set a new state-of-the-art (SOTA) on the ST En-De MuST-C task. For the ASR task, we outperform the very deep Transformer-based model of Pham et al. (2019). We achieve a 19.2% improvement in WER compared to the model with the same number of Encoder and Decoder blocks. The best Transformer-based MT model achieves a BLEU score of 30.10; however, it uses 60 Encoder blocks. The performance drop on the MT task is attributed to simply training a bigger model without using any of the additional initialization techniques proposed in Liu et al. (2015); Wu et al. (2019). However, the MT task helps the other tasks and improves the overall performance of the system."
},
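The feature-wise modulation discussed above conditions intermediate representations on the TCN's task output rather than prepending tokens. A minimal sketch of one plausible form of such conditioning (a FiLM-style scale-and-shift, which is an assumption here, not necessarily the paper's exact operation):

```python
def featurewise_modulate(features, gamma, beta):
    """Scale and shift each feature dimension by task-conditioned parameters.

    features: list of d-dimensional frames (one per time step)
    gamma, beta: d-dimensional task-conditioned scale and shift vectors
    """
    return [[g * x + b for g, x, b in zip(gamma, frame, beta)]
            for frame in features]
```

Because gamma and beta are produced from the task embedding, every encoder feature is steered toward the requested task, which is why this modulation disambiguates ST from ASR even when the input speech is identical.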
{
"heading": "4 RELATED WORK",
"text": "Various MTL techniques have been widely used to improve the performance of endtoend neural networks. These techniques are known to solve issues like overfitting and data scarcity. Joint learning (Caruana, 1997) improves the generalization by leveraging the shared information contained in the training signals of related tasks. MAML (Finn et al., 2017b) was proposed for training a joint model on a variety of tasks, such that it can quickly adapt to new tasks. Both the learning approaches require a finetuning phase resulting in different models for each task. Moreover, during finetuning phase the model substantially forgets the knowledge acquired during the largescale pretraining.\nOne of the original solutions to this problem is pseudorehearsal, which involves learning the new task while rehearsing generated items representative of the previous task. This has been investigated and addressed to a certain extent in Atkinson et al. (2018) and Li & Hoiem (2018). He et al. (2020) address this by using a mixreview finetuning strategy, where they include the pretraining objective during the finetuning phase. Raffel et al. (2019) take a different approach by providing the task information to the model and achieve performance improvements on different texttotext tasks. Although this alleviates the need for finetuning, it cannot be extended to the tasks involving complex modalities. In our work, we propose a generic framework on top of MTL to provide task information to the model which can be applied irrespective of the task modalities. It also removes the need for finetuning, tackling the issue of forgetfulness at its root cause.\nA few approaches have also tried to train multiple tasks with a single model, Cheung et al. (2019) project the input to orthogonal subspaces based on the task information. In the approach proposed by Li & Hoiem (2018), the model is trained on various image classification tasks having the same input modality. 
They preserve the output of the model on the training examples such that the parameters don't deviate much from the original tasks. This is useful when the tasks share the same goal, e.g., classification. However, we train on a much more varied set of tasks, which might also have the same inputs with different end goals. Strezoski et al. (2019) propose to apply a fixed mask based on the task identity. Our work can be seen as a generalization of this work. Compared to all these approaches, our model is capable of performing both task identification and the corresponding task learning simultaneously. It learns to control the interactions among various tasks based on the inter-task similarity without any explicit supervision.\nIn the domain of neural machine translation, several MTL approaches have been proposed (Gu et al., 2018a;b). Similarly, recent works have shown that jointly training the ST, ASR, and MT tasks improves the overall performance (Liu et al., 2019; Indurthi et al., 2020). However, all of these require a separate fine-tuning phase."
},
{
"heading": "5 CONCLUSION",
"text": "This work proposes a taskaware framework which helps to improve the learning ability of the existing multitask learning strategies. It addresses the issues faced during vanilla multitask learning, which includes forgetfulness during finetuning and the problems associated with having separate models for each task. The proposed approach helps to align better the existing multitask learning strategies with human learning. It achieves significant performance improvements with a single model on a variety of tasks which is favourable in low resource settings."
},
{
"heading": "6 APPENDIX",
"text": ""
},
{
"heading": "6.1 DATASETS",
"text": ""
},
{
"heading": "6.1.1 DATA AUGMENTATION FOR SPEECH TRANSLATION",
"text": "Table 5 provides details about the datasets used for the multimodality experiments. Since EnDe ST task has relatively fewer training examples compared to ASR and MT tasks, we augment the ST dataset with synthetic training examples. We generate the synthetic speech sequence and pair it with the synthetic German text sequences. obtained by using the top two beam search results of the two trained EnglishtoGerman NMT models. For speech sequence, we use the Sox library to generate the speech signal using different values of speed, echo, and tempo parameters similar to (Potapczyk et al., 2019). The parameter values are uniformly sampled using these ranges : tempo ∈ (0.85, 1.3), speed ∈ (0.95, 1.05), echo delay ∈ (20, 200), and echo decay ∈ (0.05, 0.2). We also train two NMT models on ENDe language pair to generate synthetic German sequence. The first model is based on Edunov et al. (2018) and the second model (Indurthi et al., 2019) is trained on WMT’18 EnDe and OpenSubtitles datsets. We increase the size of the IWSLT 19(filtered) ST dataset to five times of the original size by augmenting 4x data – four text sequences using the top two beam results from each ENDe NMT model and four speech signals using the Sox parameter ranges. For the EuroparlST, we augment 2x examples to triple the size. The TEDLIUM 3 dataset does not contain speechtotext translation examples originally; hence, we create 2x synthetic speechtotext translations using speechtotext transcripts. Finally, for the MuSTC dataset, we only create synthetic speech and pair it with the original translation to increase the dataset size to 4x. The Overall, we created the synthetic training data of size approximately equal to four times of the original data for the ST task."
},
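The augmentation above samples Sox parameters uniformly from fixed ranges. The steps can be sketched as follows; the command layout and the echo gain values (0.8/0.9) are assumptions for illustration, since the section only specifies the delay and decay ranges (SoX's echo effect takes `gain-in gain-out delay decay`):

```python
import random

# Parameter ranges stated in the section.
RANGES = {"tempo": (0.85, 1.3), "speed": (0.95, 1.05),
          "echo_delay": (20, 200), "echo_decay": (0.05, 0.2)}

def sample_sox_params(rng=random):
    """Uniformly sample one augmentation configuration from the stated ranges."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in RANGES.items()}

def sox_command(in_wav, out_wav, p):
    # Echo gains (0.8, 0.9) are illustrative placeholders, not from the paper.
    return (f"sox {in_wav} {out_wav} tempo {p['tempo']:.2f} speed {p['speed']:.2f} "
            f"echo 0.8 0.9 {p['echo_delay']:.0f} {p['echo_decay']:.2f}")
```

Sampling one configuration per synthetic copy yields the 4x speech variants described for IWSLT 19.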
{
"heading": "6.1.2 TASK IDENTIFICATION WITHOUT TASK INFORMATION",
"text": "Under the multimodality setting, we conducted smaller scale experiments using only one dataset for each ST, ASR, and ST tasks. The details of the datasets used have been provided in Table 7. We trained on single p40 GPU for 400k steps. The corresponding results have been reported in Table 6. All the results have been obtained without any finetuning. Even though our taskaware MTL model achieves significant performance improvement over vanilla MTL models, we can observe that the vanilla MTL models are also able to give a decent performance on all tasks without any finetuning. An explanation for this is that we used MuSTC dataset for the EnDe ST task and TEDLium v3 for the ASR task, which means that the source speech is coming from 2 different sources. However, if we use the same datasets for both the tasks(after data augmentation), the MTL models get confused and the ST, ASR outputs are mixed. The MTL models might be able to learn the task identities simply based on the source speech sequences, since these sequence are coming from different datasets for each task type–MuSTC for ST and TEDLIUM v3 for ASR. However, this does not mean that vanilla MTL models perform joint learning effectively. A human who can perform multiple tasks from the same input is aware of the task he has to perform beforehand. Similarly, it is unreasonable to expect different outputs (translation, transcription) from a model to the same type of input (English speech) without any explicit task information."
},
{
"heading": "6.1.3 IMPLEMENTATION DETAILS",
"text": "The detailed hyperparameters settings used for the single modality and multi modality experiments have been provided in the Table 8.\nS No. MTL Strategy MT BLEU (↑) ASR(WER (↓) ST(BLEU (↑)Test Dev Test Dev Test 1 Joint Learning 14.77 29.56 30.87 13.10 12.70 2 Meta Learning 14.74 28.58 29.92 13.89 13.67\nThis Work"
}
]
 2020
 
SP:a1e2218e6943bf138aeb359e23628676b396ed66
 [
"This work proposes a deep reinforcement learningbased optimization strategy to the fuel optimization problem for the hybrid electric vehicle. The problem has been formulated as a fully observed stochastic Markov Decision Process (MDP). A deep neural network is used to parameterize the policy and value function. A continuous time representation of the problem is also used compared to conventional techniques which mostly use a discrete time formulation. "
]
 This paper deals with the fuel optimization problem for hybrid electric vehicles in a reinforcement learning framework. First, considering the hybrid electric vehicle as a completely observable nonlinear system with uncertain dynamics, we solve an open-loop deterministic optimization problem to determine a nominal optimal state. This is followed by the design of a deep reinforcement learning based optimal controller for the nonlinear system, using a concurrent learning based system identifier, such that the actual states and the control policy track the optimal state and optimal policy autonomously, even in the presence of external disturbances, modeling errors, uncertainties, and noise, while significantly reducing the computational complexity. This is in sharp contrast to conventional methods like PID and Model Predictive Control (MPC), as well as traditional RL approaches like ADP, DDP, and DQN, which mostly depend on a set of predefined rules and provide suboptimal solutions under similar conditions. The low value of the H-infinity (H∞) performance index of the proposed optimization algorithm addresses the robustness issue. The proposed optimization technique is compared with traditional fuel optimization strategies for hybrid electric vehicles to illustrate the efficacy of the proposed method.
 []
 [
{
"authors": [
"R. Akrour",
"A. Abdolmaleki",
"H. Abdulsamad",
"G. Neumann"
],
"title": "Model Free Trajectory Optimization for Reinforcement Learning",
"venue": "In Proceedings of the International Conference on Machine Learning (ICML),",
"year": 2016
},
{
"authors": [
"A. Barto",
"R. Sutton",
"C. Anderson"
],
"title": "Neuronlike adaptive elements that can solve difficult learning control problems",
"venue": "IEEE Transaction on Systems, Man, and Cybernetics,",
"year": 1983
},
{
"authors": [
"R. Bellman"
],
"title": "The theory of dynamic programming",
"venue": "DTIC Document, Technical Representations",
"year": 1954
},
{
"authors": [
"D. Bertsekas"
],
"title": "Dynamic Programming and Optimal Control",
"venue": "Athena Scientific,",
"year": 2007
},
{
"authors": [
"S. Bhasin",
"R. Kamalapurkar",
"M. Johnson",
"K. Vamvoudakis",
"F.L. Lewis",
"W. Dixon"
],
"title": "A novel actorcriticidentifier architecture for approximate optimal control of uncertain nonlinear systems",
"venue": null,
"year": 2013
},
{
"authors": [
"R.P. Bithmead",
"V. Wertz",
"M. Gerers"
],
"title": "Adaptive Optimal Control: The Thinking Man’s G.P.C",
"venue": "Prentice Hall Professional Technical Reference,",
"year": 1991
},
{
"authors": [
"A. Bryson",
"H.Y.C"
],
"title": "Applied Optimal Control: Optimization, Estimation and Control. Washington: Hemisphere",
"venue": "Publication Corporation,",
"year": 1975
},
{
"authors": [
"G.V. Chowdhary",
"E.N. Johnson"
],
"title": "Theory and flighttest validation of a concurrentlearning adaptive controller",
"venue": "Journal of Guidance Control and Dynamics,34:,",
"year": 2011
},
{
"authors": [
"G. Chowdhary",
"T. Yucelen",
"M. Mühlegg",
"E.N. Johnson"
],
"title": "Concurrent learning adaptive control of linear systems with exponentially convergent bounds",
"venue": "International Journal of Adaptive Control and Signal Processing,",
"year": 2013
},
{
"authors": [
"P. Garcı́a",
"J.P. Torreglosa",
"L.M. Fernández",
"F. Jurado"
],
"title": "Viability study of a FCbatterySC tramway controlled by equivalent consumption minimization strategy",
"venue": "International Journal of Hydrogen Energy,",
"year": 2012
},
{
"authors": [
"A. Gosavi"
],
"title": "Simulationbased optimization: Parametric optimization techniques and reinforcement learning",
"venue": null,
"year": 2003
},
{
"authors": [
"J. Han",
"Y. Park"
],
"title": "A novel updating method of equivalent factor in ECMS for prolonging the lifetime of battery in fuel cell hybrid electric vehicle",
"venue": "In IFAC Proceedings,",
"year": 2012
},
{
"authors": [
"J. Han",
"J.F. Charpentier",
"T. Tang"
],
"title": "An Energy Management System of a Fuel Cell/Battery",
"venue": "Hybrid Boat. Energies,",
"year": 2014
},
{
"authors": [
"J. Han",
"Y. Park",
"D. Kum"
],
"title": "Optimal adaptation of equivalent factor of equivalent consumption minimization strategy for fuel cell hybrid electric vehicles under active state inequality constraints",
"venue": "Journal of Power Sources,",
"year": 2014
},
{
"authors": [
"R. Kamalapurkar",
"L. Andrews",
"P. Walters",
"W.E. Dixon"
],
"title": "Modelbased reinforcement learning for infinitehorizon approximate optimal tracking",
"venue": "In Proceedings of the IEEE Conference on Decision and Control (CDC),",
"year": 2014
},
{
"authors": [
"R. Kamalapurkar",
"H. Dinh",
"S. Bhasin",
"W.E. Dixon"
],
"title": "Approximate optimal trajectory tracking for continuoustime nonlinear systems",
"venue": null,
"year": 2015
},
{
"authors": [
"S.G. Khan"
],
"title": "Reinforcement learning and optimal adaptive control: An overview and implementation examples",
"venue": "Annual Reviews in Control,",
"year": 2012
},
{
"authors": [
"M.J. Kim",
"H. Peng"
],
"title": "Power management and design optimization of fuel cell/battery hybrid vehicles",
"venue": "Journal of Power Sources,",
"year": 2007
},
{
"authors": [
"D. Kirk"
],
"title": "Optimal Control Theory: An Introduction",
"venue": "Mineola, NY,",
"year": 2004
},
{
"authors": [
"V. Konda",
"J. Tsitsiklis"
],
"title": "On actorcritic algorithms",
"venue": "SIAM Journal on Control and Optimization,",
"year": 2004
},
{
"authors": [
"S. Levine",
"P. Abbeel"
],
"title": "Learning Neural Network Policies with Guided Search under Unknown Dynamics",
"venue": "In Advances in Neural Information Processing Systems (NeurIPS),",
"year": 2014
},
{
"authors": [
"F.L. Lewis",
"S. Jagannathan",
"A. Yesildirak"
],
"title": "Neural network control of robot manipulators and nonlinear systems",
"venue": null,
"year": 1998
},
{
"authors": [
"F.L. Lewis",
"D. Vrabie",
"V.L. Syrmos"
],
"title": "Optimal Control, 3rd edition",
"venue": null,
"year": 2012
},
{
"authors": [
"H. Li",
"A. Ravey",
"A. N’Diaye",
"A. Djerdir"
],
"title": "A Review of Energy Management Strategy for Fuel Cell Hybrid Electric Vehicle",
"venue": "In IEEE Vehicle Power and Propulsion Conference (VPPC),",
"year": 2017
},
{
"authors": [
"W.S. Lin",
"C.H. Zheng"
],
"title": "Energy management of a fuel cell/ultracapacitor hybrid power system using an adaptive optimal control method",
"venue": "Journal of Power Sources,",
"year": 2011
},
{
"authors": [
"P. Mehta",
"S. Meyn"
],
"title": "Qlearning and pontryagin’s minimum principle",
"venue": "In Proceedings of IEEE Conference on Decision and Control,",
"year": 2009
},
{
"authors": [
"D. Mitrovic",
"S. Klanke",
"S. Vijayakumar"
],
"title": "Adaptive Optimal Feedback Control with Learned Internal Dynamics Models",
"venue": null,
"year": 2010
},
{
"authors": [
"H. Modares",
"F.L. Lewis"
],
"title": "Optimal tracking control of nonlinear partiallyunknown constrainedinput systems using integral reinforcement",
"venue": "learning. Automatica,",
"year": 2014
},
{
"authors": [
"S.J. Moura",
"D.S. Callaway",
"H.K. Fathy",
"J.L. Stein"
],
"title": "Tradeoffs between battery energy capacity and stochastic optimal power management in plugin hybrid electric vehicles",
"venue": "Journal of Power Sources,",
"year": 1959
},
{
"authors": [
"S.N. Motapon",
"L. Dessaint",
"K. AlHaddad"
],
"title": "A Comparative Study of Energy Management Schemes for a FuelCell Hybrid Emergency Power System of MoreElectric Aircraft",
"venue": "IEEE Transactions on Industrial Electronics,",
"year": 2014
},
{
"authors": [
"G. Paganelli",
"S. Delprat",
"T.M. Guerra",
"J. Rimaux",
"J.J. Santin"
],
"title": "Equivalent consumption minimization strategy for parallel hybrid powertrains",
"venue": "In IEEE 55th Vehicular Technology Conference, VTC Spring 2002 (Cat. No.02CH37367),",
"year": 2002
},
{
"authors": [
"F. Segura",
"J.M. Andújar"
],
"title": "Power management based on sliding control applied to fuel cell systems: A further step towards the hybrid control concept",
"venue": "Applied Energy,",
"year": 2012
},
{
"authors": [
"R.S. Sutton",
"A.G. Barto"
],
"title": "Reinforcement Learning: An Introduction",
"venue": null,
"year": 1998
},
{
"authors": [
"E. Theoddorou",
"Y. Tassa",
"E. Todorov"
],
"title": "Stochastic Differential Dynamic Programming",
"venue": "In Proceedings of American Control Conference,",
"year": 2010
},
{
"authors": [
"E. Todorov",
"Y. Tassa"
],
"title": "Iterative Local Dynamic Programming",
"venue": "In Proceedings of the IEEE International Symposium on ADP and RL,",
"year": 2009
},
{
"authors": [
"J.P. Torreglosa",
"P. Garcı́a",
"L.M. Fernández",
"F. Jurado"
],
"title": "Predictive Control for the Energy Management of a FuelCellBatterySupercapacitor Tramway",
"venue": "IEEE Transactions on Industrial Informatics,",
"year": 2014
},
{
"authors": [
"D. Vrabie"
],
"title": "Online adaptive optimal control for continuoustime systems",
"venue": "Ph.D. dissertation, University of Texas at Arlington,",
"year": 2010
},
{
"authors": [
"Dan Yu",
"Mohammadhussein Rafieisakhaei",
"Suman Chakravorty"
],
"title": "Stochastic Feedback Control of Systems with Unknown Nonlinear Dynamics",
"venue": "In IEEE Conference on Decision and Control,",
"year": 2017
},
{
"authors": [
"M.K. Zadeh"
],
"title": "Stability Analysis Methods and Tools for Power ElectronicsBased DC Distribution Systems: Applicable to OnBoard Electric Power Systems and Smart Microgrids",
"venue": null,
"year": 2016
},
{
"authors": [
"X. Zhang",
"C.C. Mi",
"A. Masrur",
"D. Daniszewski"
],
"title": "Wavelet transformbased power management of hybrid vehicles with multiple onboard energy sources including fuel cell, battery and ultracapacitor",
"venue": "Journal of Power Sources,",
"year": 2008
},
{
"authors": [
"C. Zheng",
"S.W. Cha",
"Y. Park",
"W.S. Lim",
"G. Xu"
],
"title": "PMPbased power management strategy of fuel cell hybrid vehicles considering multiobjective optimization",
"venue": "International Journal Precision Engineering and Manufacturing,",
"year": 2013
},
{
"authors": [
"C.H. Zheng",
"G.Q. Xu",
"Y.I. Park",
"W.S. Lim",
"S.W. Cha"
],
"title": "Prolonging fuel cell stack lifetime based on Pontryagin’s Minimum Principle in fuel cell hybrid vehicles and its economic influence evaluation",
"venue": "Journal Power Sources,",
"year": 2014
},
{
"authors": [
"X. Zhong",
"H. He",
"H. Zhang",
"Z. Wang"
],
"title": "Optimal Control for Unknown DiiscreteTime Nonlinear Markov Jump Systems Using Adaptive Dynamic Programming",
"venue": "IEEE Transactions on Neural networks and learning systems,",
"year": 2014
}
]
 [
{
"heading": "1 INTRODUCTION",
"text": "Hybrid electric vehicles powered by fuel cells and batteries have attracted great enthusiasm in modern days as they have the potential to eliminate emissions from the transport sector. Now, both the fuel cells and batteries have got several operational challenges which make the separate use of each of them in automotive systems quite impractical. HEVs and PHEVs powered by conventional diesel engines and batteries merely reduce the emissions, but cannot eliminate completely. Some of the drawbacks include carbon emission causing environmental pollution from fuel cells and long charging times, limited driving distance per charge, nonavailability of charging stations along the driving distance for the batteries. Fuel Cell powered Hybrid Electric Vehicles (FCHEVs) powered by fuel cells and batteries offer emissionfree operation while overcoming the limitations of driving distance per charge and long charging times. So, FCHEVs have gained significant attention in recent years. As we find, most of the existing research which studied and developed several types of Fuel and Energy Management Systems (FEMS) for transport applications include Sulaiman et al. (2018) who has presented a critical review of different energy and fuel management strategies for FCHEVs. Li et al. (2017) has presented an extensive review of FMS objectives and strategies for FCHEVs. These strategies, however can be divided into two groups, i.e., modelbased and modelfree. The modelbased methods mostly depend on the discretization of the state space and therefore suffers from the inherent curse of dimensionality. The coumputational complexity increases in an exponential fashion with the increase in the dimension of the state space. 
This is quite evident in methods like state-based EMS (Han et al., 2014; Zadeh et al., 2014; 2016), the rule-based fuzzy logic strategy (Motapon et al., 2014), classical PI and PID strategies (Segura et al., 2012), Pontryagin’s minimum principle (PMP) (Zheng et al., 2013; 2014), model predictive control (MPC) (Kim et al., 2007; Torreglosa et al., 2014), and differential dynamic programming (DDP) (Kim et al., 2007). Of all these methods, differential dynamic programming is considered computationally quite efficient; it relies on the linearization of the nonlinear system equations about a nominal state trajectory, followed by a policy iteration to improve the policy. In this approach, the control policy for fuel optimization is used to compute the optimal trajectory, and the policy is updated until convergence is achieved.\nThe model-free methods mostly involve Adaptive Dynamic Programming (Bithmead et al., 1991; Zhong et al., 2014) and Reinforcement Learning (RL) based strategies (Mitrovic et al., 2010; Khan et al., 2012), including DDP (Mayne et al., 1970). These methods compute the control policy for fuel optimization through continuous engagement with the environment and measurement of the system response, arriving at a solution of the DP equation recursively in an online fashion. In deep reinforcement learning, multi-layer neural networks are used to represent the learning function in a nonlinear parameterized approximation form. 
Although a compact parameterized form does exist for the learning function, the inability to know it a priori makes the method suffer from the curse of dimensionality (O(d²), where d is the dimension of the state space), thus making it infeasible to apply to a high-dimensional fuel management system.\nThe computational complexity problem of the traditional RL methods like policy iteration (PI) and value iteration (VI) (Bellman, 1954; Barto et al., 1983; Bertsekas, 2007) can be overcome by a simulation-based approach (Sutton et al., 1998), where the policy or the value function can be parameterized with sufficient accuracy using a small number of parameters. Thus, we can transform the optimal control problem into an approximation problem in the parameter space (Bertsekas et al., 1996; Tsitsiklis et al., 2003; Konda et al., 2004), sidestepping the need for model knowledge and excessive computations. However, convergence requires sufficient exploration of the state-action space, and the optimality of the obtained policy depends primarily on the accuracy of the parameterization scheme.\nAs a result, a good approximation of the value function is of utmost importance to the stability of the closed-loop system, and it requires convergence of the unknown parameters to their optimal values. Hence, this sufficient exploration condition manifests itself as a persistence of excitation (PE) condition when RL is implemented online (Mehta et al., 2009; Bhasin et al., 2013; Vrabie, 2010), which is impossible to guarantee a priori.\nMost of the traditional approaches for fuel optimization are unable to address the robustness issue. 
The methods described in the literature, including those of PID (Segura et al., 2012), Model Predictive Control (MPC) (Kim et al., 2007; Torreglosa et al., 2014), and Adaptive Dynamic Programming (Bithmead et al., 1991; Zhong et al., 2014), as well as the simulation-based RL strategies (Bertsekas et al., 1996; Tsitsiklis et al., 2003; Konda et al., 2004), suffer from the drawback of providing a suboptimal solution in the presence of external disturbances and noise. As a result, the application of these methods to fuel optimization for hybrid electric vehicles, which are plagued by various disturbances in the form of sudden charge and fuel depletion, changes in the environment, and changes in the values of parameters like the remaining useful life, internal resistance, voltage, and temperature of the battery, is quite impractical.\nThe fuel optimization problem for the hybrid electric vehicle has therefore been formulated as a fully observed stochastic Markov Decision Process (MDP). Instead of using Trajectory-optimized LQG (T-LQG) or Model Predictive Control (MPC) to provide a suboptimal solution in the presence of disturbances and noise, we propose a deep reinforcement learning-based optimization strategy using concurrent learning (CL) that uses the state-derivative-action-reward tuples to present a robust optimal solution. The convergence of the weight estimates of the policy and the value function to their optimal values justifies our claim. The two major contributions of the proposed approach can therefore be summarized as follows:\n1) The popular methods in the RL literature, including policy iteration and value iteration, suffer from the curse of dimensionality owing to the use of a simulation-based technique which requires sufficient exploration of the state space (PE condition). Therefore, the proposed model-based RL scheme aims to relax the PE condition by using a concurrent learning (CL) based system identifier to reduce the computational complexity. 
Generally, an estimate of the true controller designed using the CL-based method introduces an approximation error which makes the stability analysis of the system quite intractable. The proposed method, however, is able to establish the stability of the closed-loop system by introducing the estimation error and analyzing the augmented system trajectory obtained under the influence of the control signal.\n2) The proposed optimization algorithm implemented for fuel management in hybrid electric vehicles overcomes the limitations of the conventional fuel management approaches (PID, Model Predictive Control, ECMS, PMP) and traditional RL approaches (Adaptive Dynamic Programming, DDP, DQN), all of which suffer from suboptimal behaviour in the presence of external disturbances, model uncertainties, frequent charging and discharging, changes of environment, and other noise. The H-infinity (H∞) performance index, defined as the ratio of the disturbance to the control energy, has been established for the RL-based optimization technique and compared with the traditional strategies to address the robustness issue of the proposed design scheme.\nThe rest of the paper is organised as follows: Section 2 presents the problem formulation, including the open-loop optimization and the reinforcement learning-based optimal controller design, described in subsections 2.1 and 2.2 respectively. The parametric system identification and value function approximation are detailed in subsections 2.2.1 and 2.2.2. This is followed by the stability and robustness analysis (using the H-infinity (H∞) performance index) of the closed-loop system in subsection 2.2.4. Section 3 provides the simulation results and discussion, followed by the conclusion in Section 4."
},
{
"heading": "2 PROBLEM FORMULATION",
"text": "Consider the fuel management system of a hybrid electric vehicle as a continuous-time affine nonlinear dynamical system:\nẋ = f(x,w) + g(x)u, y = h(x, v) (1)\nwhere x ∈ Rnx, y ∈ Rny and u ∈ Rnu are the state, output and control vectors respectively, f(.) denotes the drift dynamics and g(.) denotes the control effectiveness matrix. The functions f and h are assumed to be locally Lipschitz continuous with f(0) = 0, and ∇f(x) is continuous for every bounded x ∈ Rnx. The process noise w and measurement noise v are assumed to be zero-mean, uncorrelated Gaussian white noise with covariances W and V, respectively.\nAssumption 1: We consider the system to be fully observed:\ny = h(x, v) = x (2)\nRemark 1: This assumption is made to provide a tractable formulation of the fuel management problem and to sidestep the complex treatment required when a stochastic control problem is treated as a partially observed MDP (POMDP).\nOptimal Control Problem: For a continuous-time system with unknown nonlinear dynamics f(.), we need to find an optimal control policy πt over a finite time horizon [0, t], where πt is the control policy at time t such that πt = u(t), minimizing the cost function J = ∫₀ᵗ (xᵀQx + uᵀRu) dτ + x(t)ᵀFx(t), where Q, F ≥ 0 and R > 0."
},
{
"heading": "2.1 OPEN LOOP OPTIMIZATION",
"text": "Consider a noise-free nonlinear dynamical system with unknown dynamics:\nẋ = f(x, 0) + g(x)u, y = h(x, v) = x (3)\nwhere x0 ∈ Rnx is the initial state, y ∈ Rny and u ∈ Rnu are the output and control vectors respectively, f(.) has its usual meaning, and the corresponding cost function is Jd(x0, ut) = ∫₀ᵗ (xᵀQx + uᵀRu) dτ + x(t)ᵀFx(t).\nRemark: We use a piecewise convex function to approximate the non-convex fuel function globally, which is used to formulate the cost function for the fuel optimization.\nThe open-loop optimization problem is to find the control sequence ut such that, for a given initial state x0,\nūt = arg min Jd(x0, ut),\nsubject to ẋ = f(x, 0) + g(x)u,\ny = h(x, v) = x. (4)\nThe problem is solved using the gradient descent approach (Bryson et al., 1962; Gosavi et al., 2003), and the procedure is as follows: starting from a random initial value of the control sequence U(0) = [ut(0)], the control policy is updated iteratively as\nU(n+1) = U(n) − α∇U Jd(x0, U(n)), (5)\nuntil convergence is achieved up to a certain degree of accuracy, where U(n) denotes the control value at the n-th iteration and α is the step-size parameter. The gradient vector is given by:\n∇U Jd(x0, U(n)) = (∂Jd/∂u0, ∂Jd/∂u1, ∂Jd/∂u2, ..., ∂Jd/∂ut)|(x0,ut) (6)\nThe Gradient Descent Algorithm is detailed in Appendix A.1.\nRemark 2: The open-loop optimization problem is thus solved with the gradient descent approach by treating the underlying system dynamics as a black box and applying a sequence of input-output tests, without perfect knowledge of the nonlinearities in the model at design time. This is a simple and useful strategy for complex dynamical systems with complicated cost-to-go functions, and it is suitable for parallelization."
},
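The open-loop procedure of subsection 2.1 can be sketched numerically. Below is a minimal illustration, assuming a black-box Euler rollout of the dynamics and one-sided finite differences for the gradient in (6); the function names, toy linear system and parameter values are our own, not from the paper.

```python
import numpy as np

def open_loop_optimize(f, g, x0, Q, R, F, T=1.0, dt=0.1,
                       alpha=0.05, h=1e-4, tol=1e-3, max_iter=300):
    """Finite-difference gradient descent over a discretized control sequence.

    The dynamics (f, g) are treated as a black box: the cost J_d is
    evaluated by an Euler rollout, and its gradient is estimated by
    perturbing each control value by h (cf. steps 3-5 of Appendix A.1).
    """
    n_steps = int(round(T / dt))
    U = np.zeros(n_steps)  # initial control sequence U(0)

    def rollout_cost(U):
        x = np.array(x0, dtype=float)
        J = 0.0
        for u in U:
            J += (x @ Q @ x + u * R * u) * dt   # running cost x'Qx + u'Ru
            x = x + (f(x) + g(x) * u) * dt      # Euler step of x_dot = f + g u
        return J + x @ F @ x                    # terminal cost x'Fx

    for _ in range(max_iter):
        J0 = rollout_cost(U)
        grad = np.zeros_like(U)
        for i in range(n_steps):                # one-sided finite differences
            Up = U.copy()
            Up[i] += h
            grad[i] = (rollout_cost(Up) - J0) / h
        U = U - alpha * grad                    # gradient step, eq. (5)
        if np.linalg.norm(grad) < tol:
            break
    return U, rollout_cost(U)
```

Because the cost is evaluated only through rollouts, each coordinate perturbation is independent, which is what makes the scheme straightforward to parallelize as noted in Remark 2.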
{
"heading": "2.2 REINFORCEMENT LEARNING BASED OPTIMAL CONTROLLER DESIGN",
"text": "Considering the affine nonlinear dynamical system given by equation (1), our objective is to design a control law to track the optimal time-varying trajectory x̄(t) ∈ Rnx. A novel cost function is formulated in terms of the tracking error defined by e = x(t) − x̄(t) and the control error defined by the difference between the actual control signal and the desired optimal control signal. This formulation overcomes the problem of an infinite cost that arises when the cost function is defined in terms of the tracking error e(t) and the actual control signal u(t) only (Zhang et al., 2011; Kamalapurkar et al., 2015). The following assumptions are made to determine the desired steady-state control.\nAssumption 2: (Kamalapurkar et al., 2015) The function g(x) in equation (1) is bounded, the matrix g(x) has full column rank for all x(t) ∈ Rnx, and the function g+ : Rn → Rm×n defined as g+ = (gᵀg)⁻¹gᵀ is bounded and locally Lipschitz.\nAssumption 3: (Kamalapurkar et al., 2015) The optimal trajectory is bounded by a known positive constant b ∈ R such that ‖x̄‖ ≤ b, and there exists a locally Lipschitz function hd such that ˙x̄ = hd(x̄) and g(x̄)g+(x̄)(hd(x̄) − f(x̄)) = hd(x̄) − f(x̄).\nUsing Assumptions 2 and 3, the control signal ud required to track the desired trajectory x̄(t) is given as ud(x̄) = g+d(hd(x̄) − fd), where fd = f(x̄) and g+d = g+(x̄). The control error is µ = u(t) − ud(x̄). The system dynamics can now be expressed as\nζ̇ = F(ζ) + G(ζ)µ (7)\nwhere the merged state ζ(t) ∈ R2n is given by ζ(t) = [eᵀ, x̄ᵀ]ᵀ and the functions F(ζ) and G(ζ) are defined as F(ζ) = [fᵀ(e + x̄) − hᵀd + uᵀd(x̄)gᵀ(e + x̄), hᵀd]ᵀ and G(ζ) = [gᵀ(e + x̄), 0m×n]ᵀ, where 0m×n denotes a matrix of zeros. The control error µ is treated hereafter as the design variable. 
The control objective is to solve a finite-horizon optimal tracking problem online, i.e., to design a control signal µ that minimizes, while tracking the desired trajectory, the cost-to-go function J(ζ, µ) = ∫₀ᵗ r(ζ(τ), µ(τ)) dτ, where the local cost r : R2n × Rm → R is given as r(ζ, µ) = Q̄(ζ) + µᵀRµ, R ∈ Rm×m is a positive definite symmetric matrix and Q : Rn → R is a continuous positive definite function. Based on the assumption that an optimal policy exists, it can be characterized in terms of the value function V∗ : R2n → R defined as V∗(ζ) = min over µ(τ) ∈ U, τ ∈ R≥t, of ∫₀ᵗ r(φµ(τ; t, ζ), µ(τ)) dτ, where U ⊂ Rm is the action space and φµ(t; t0, ζ0) is the trajectory of the system (7) under the control µ : R≥0 → Rm with initial condition ζ0 ∈ R2n and initial time t0 ∈ R≥0. Assuming further that V∗ is continuously differentiable everywhere, the closed-form solution (Kirk, 2004) is µ∗(ζ) = −(1/2)R⁻¹Gᵀ(ζ)(∇ζV∗(ζ))ᵀ, where ∇ζ(.) = ∂(.)/∂ζ. This satisfies the Hamilton-Jacobi-Bellman (HJB) equation (Kirk, 2004)\n∇ζV∗(ζ)(F(ζ) + G(ζ)µ∗(ζ)) + Q̄(ζ) + µ∗ᵀ(ζ)Rµ∗(ζ) = 0 (8)\nwith the initial condition V∗(0) = 0, where the function Q̄ : R2n → R is defined as Q̄([eᵀ, x̄ᵀ]ᵀ) = Q(e) for all e(t), x̄(t) ∈ Rn. Since a closed-form solution of the HJB equation is generally infeasible to obtain, we seek an approximate solution. Therefore, an actor-critic method is used to obtain parametric estimates of the optimal value function and the optimal policy, denoted V̂(ζ, Ŵc) and µ̂(ζ, Ŵa), where Ŵc ∈ RL and Ŵa ∈ RL are the parameter estimate vectors. The task of the actor and critic is to learn the corresponding parameters. 
Substituting the estimates V̂ and µ̂ for V∗ and µ∗ in the HJB equation yields the residual error, known as the Bellman error (BE), δ(ζ, Ŵc, Ŵa) = Q̄(ζ) + µ̂ᵀ(ζ, Ŵa)Rµ̂(ζ, Ŵa) + ∇ζV̂(ζ, Ŵc)(F(ζ) + G(ζ)µ̂(ζ, Ŵa)), where δ : R2n × RL × RL → R. Solving the problem requires the actor and the critic to find parameters Ŵa and Ŵc such that δ(ζ, Ŵc, Ŵa) = 0 and µ̂(ζ, Ŵa) = −(1/2)R⁻¹Gᵀ(ζ)(∇ζV̂(ζ, Ŵc))ᵀ for all ζ ∈ R2n. As the exact basis functions for the approximation are not known a priori, we instead seek approximate parameters that minimize the BE. However, a uniform approximation of the value function and the optimal control policy over the entire operating domain requires finding parameters that minimize the error Es : RL × RL → R defined as Es(Ŵc, Ŵa) = supζ |δ(ζ, Ŵc, Ŵa)|, which in turn requires exact knowledge of the system model. Two of the most popular methods used to make the control design robust to system uncertainties in this context are integral RL (Lewis et al., 2012; Modares et al., 2014) and state-derivative estimation (Bhasin et al., 2013; Kamalapurkar et al., 2014). Both methods rely on the persistence of excitation (PE) condition, which requires the state trajectory φµ̂(t; t0, ζ0) to cover the entire operating domain for the parameters to converge to their optimal values. 
We relax this condition by using the integral technique in combination with experience replay, where every evaluation of the BE is formalized as a gained experience; these experiences are kept in a history stack so that they can be iteratively reused by the learning algorithm to improve data efficiency.\nTherefore, to relax the PE condition, we have developed a CL-based system identifier that models a parametric estimate of the system drift dynamics and is used to simulate experience by extrapolating the Bellman error (BE) over unexplored regions of the operating domain, thereby promoting exponential convergence of the parameters to their optimal values."
},
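To make the quantities of subsection 2.2 concrete, the snippet below evaluates the approximate policy µ̂(ζ, Ŵa) = −½R⁻¹Gᵀ∇ζσᵀŴa and the Bellman error δ for a scalar toy problem (ζ̇ = µ with cost ζ² + µ², whose value function V∗ = ζ² lies in the span of the single basis function σ(ζ) = ζ²). The toy system and all names are illustrative assumptions, not from the paper.

```python
import numpy as np

def policy_from_value(grad_sigma, W_a, G, R_inv):
    """mu_hat(zeta) = -1/2 R^{-1} G(zeta)' (d sigma/d zeta)' W_a."""
    return -0.5 * R_inv @ G.T @ (grad_sigma.T @ W_a)

def bellman_error(q_cost, mu, R, W_c, grad_sigma, F, G):
    """delta = Q_bar(zeta) + mu' R mu + W_c' (d sigma/d zeta) (F + G mu)."""
    return q_cost + mu @ R @ mu + W_c @ (grad_sigma @ (F + G @ mu))
```

With the ideal weight W = 1 the policy reduces to µ = −ζ and the BE vanishes identically, which is the condition δ(ζ, Ŵc, Ŵa) = 0 that the actor and critic jointly pursue.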
{
"heading": "2.2.1 PARAMETRIC SYSTEM IDENTIFICATION",
"text": "On any compact set C ⊂ Rn, the drift dynamics f can be represented using a neural network (NN) as f(x) = θᵀσf(Yᵀx1) + εθ(x), where x1 = [1, xᵀ]ᵀ ∈ Rn+1, θ ∈ R(p+1)×n and Y ∈ R(n+1)×p denote the constant unknown output-layer and hidden-layer NN weights, σf : Rp → Rp+1 denotes a bounded NN activation function, εθ : Rn → Rn is the function reconstruction error, and p ∈ N denotes the number of NN neurons. Using the universal function approximation property of single-layer NNs, given a constant matrix Y such that the rows of σf(Yᵀx1) form a proper basis, there exist constant ideal weights θ and known constants θ̄, ε̄θ, ε̄′θ ∈ R such that ‖θ‖ ≤ θ̄ < ∞, supx∈C ‖εθ(x)‖ ≤ ε̄θ and supx∈C ‖∇x εθ(x)‖ ≤ ε̄′θ, where ‖.‖ denotes the Euclidean norm for vectors and the Frobenius norm for matrices (Lewis et al., 1998). Given an estimate θ̂ ∈ R(p+1)×n of the weight matrix θ, the function f can be approximated by f̂ : R2n × R(p+1)×n → Rn defined as f̂(ζ, θ̂) = θ̂ᵀσθ(ζ), where σθ : R2n → Rp+1 is defined as σθ(ζ) = σf(Yᵀ[1, eᵀ + x̄ᵀ]ᵀ). An estimator for online identification of the drift dynamics is developed as\n˙x̂ = θ̂ᵀσθ(ζ) + g(x)u + kx̃ (9)\nwhere x̃ = x − x̂ and k ∈ R is a positive constant learning gain.\nAssumption 4: A history stack containing recorded state-action pairs {xj, uj}, j = 1, ..., M, along with numerically computed state derivatives { ˙x̄j } satisfying λmin(Σj=1..M σfj σfjᵀ) = σθ > 0 and ‖ ˙x̄j − ẋj‖ < d̄ for all j, is available a priori, where σfj = σf(Yᵀ[1, xjᵀ]ᵀ), d̄ ∈ R is a known positive constant, ẋj = f(xj) + g(xj)uj and λmin(·) denotes the minimum eigenvalue.\nThe weight estimates θ̂ are updated using the following CL-based update law:\n˙θ̂ = Γθ σf(Yᵀx1) x̃ᵀ + kθ Γθ Σj=1..M σfj ( ˙x̄j − gj uj − θ̂ᵀσfj )ᵀ (10)\nwhere kθ ∈ R is a constant positive CL gain and Γθ ∈ R(p+1)×(p+1) is a constant, diagonal, positive definite adaptation gain matrix. 
Using the identifier, the BE can be approximated as\nδ̂(ζ, θ̂, Ŵc, Ŵa) = Q̄(ζ) + µ̂ᵀ(ζ, Ŵa)Rµ̂(ζ, Ŵa) + ∇ζV̂(ζ, Ŵc)(Fθ(ζ, θ̂) + F1(ζ) + G(ζ)µ̂(ζ, Ŵa)) (11)\nIn equation (11), Fθ(ζ, θ̂) = [(θ̂ᵀσθ(ζ) − g(x)g+(xd)θ̂ᵀσθ([0n×1ᵀ, xdᵀ]ᵀ))ᵀ, 0n×1ᵀ]ᵀ and F1(ζ) = [(−hd + g(e + xd)g+(xd)hd)ᵀ, hdᵀ]ᵀ."
},
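A discrete-time sketch of the CL-based update law (10) follows, assuming a simple Euler step and a pre-recorded history stack; the variable names, the toy basis, and the way the stack is stored are our own illustrative choices.

```python
import numpy as np

def cl_identifier_step(theta_hat, Gamma, k_theta, sigma_now, x_tilde, stack, dt):
    """One Euler step of the CL-based update law (10).

    `stack` holds recorded tuples (sigma_fj, xdot_j, gu_j): the regressor,
    the (numerically computed) state derivative, and the known g(x_j) u_j
    term. The first term is driven by the instantaneous estimation error
    x_tilde = x - x_hat; the sum re-uses the recorded data, which is what
    lets the identifier keep learning without persistent excitation.
    """
    update = np.outer(sigma_now, x_tilde)
    for sigma_fj, xdot_j, gu_j in stack:
        resid = xdot_j - gu_j - theta_hat.T @ sigma_fj   # identification residual
        update += k_theta * np.outer(sigma_fj, resid)
    return theta_hat + dt * Gamma @ update
```

When the recorded regressors satisfy the rank condition of Assumption 4, iterating this step drives θ̂ to the true weights even if the current trajectory is uninformative (x̃ = 0).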
{
"heading": "2.2.2 VALUE FUNCTION APPROXIMATION",
"text": "As V∗ and µ∗ are functions of the state ζ, the optimization problem defined in Section 2.2 is intractable, so the optimal value function is represented on a compact set C ⊂ R2n using a NN as V∗(ζ) = Wᵀσ(ζ) + ε(ζ), where W ∈ RL denotes a vector of unknown NN weights, σ : R2n → RL is a bounded NN activation function, ε : R2n → R is the function reconstruction error, and L ∈ N denotes the number of NN neurons. By the universal function approximation property of single-layer NNs, for any compact set C ⊂ R2n there exist constant ideal weights W and known positive constants W̄, ε̄ and ε̄′ ∈ R such that ‖W‖ ≤ W̄ < ∞, supζ∈C ‖ε(ζ)‖ ≤ ε̄ and supζ∈C ‖∇ζε(ζ)‖ ≤ ε̄′ (Lewis et al., 1998). A NN representation of the optimal policy is obtained as\nµ∗(ζ) = −(1/2)R⁻¹Gᵀ(ζ)(∇ζσᵀ(ζ)W + ∇ζεᵀ(ζ)) (13)\nUsing the estimates Ŵc and Ŵa for the ideal weights W, the optimal value function and the optimal policy are approximated as V̂(ζ, Ŵc) = Ŵcᵀσ(ζ) and µ̂(ζ, Ŵa) = −(1/2)R⁻¹Gᵀ(ζ)∇ζσᵀ(ζ)Ŵa. The optimal control problem is therefore recast as finding a set of weights Ŵc and Ŵa online to minimize the error Êθ̂(Ŵc, Ŵa) = supζ∈χ |δ̂(ζ, θ̂, Ŵc, Ŵa)| for a given θ̂, while simultaneously improving θ̂ using the CL-based update law and ensuring stability of the system using the control law\nu = µ̂(ζ, Ŵa) + ûd(ζ, θ̂) (14)\nwhere ûd(ζ, θ̂) = g+d(hd − θ̂ᵀσθd) and σθd = σθ([01×n, xdᵀ]ᵀ). The error between ud and ûd is included in the stability analysis based on the fact that the error trajectories generated by the system ė = f(x) + g(x)u − ẋd under the controller in (14) are identical to the error trajectories generated by the system ζ̇ = F(ζ) + G(ζ)µ under the control law µ = µ̂(ζ, Ŵa) + g+d θ̃ᵀσθd + g+d εθd, where εθd = εθ(xd)."
},
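The feedforward term ud = g+d(hd(x̄) − fd) used above relies on the left pseudoinverse g+ = (gᵀg)⁻¹gᵀ from Assumption 2. A minimal sketch, with arbitrary illustrative numbers in the usage check:

```python
import numpy as np

def desired_steady_state_control(f_d, g_d, hdot_d):
    """u_d = g^+ (h_d - f_d), with the left pseudoinverse g^+ = (g'g)^{-1} g'.

    Requires g_d to have full column rank (Assumption 2). For the
    feedforward term to reproduce the desired trajectory exactly, the
    range condition g g^+ (h_d - f_d) = h_d - f_d (Assumption 3) must hold.
    """
    g_plus = np.linalg.inv(g_d.T @ g_d) @ g_d.T
    return g_plus @ (hdot_d - f_d)
```

The second assertion below verifies the range condition for the chosen numbers: the underactuated direction of g contributes nothing to hd − fd, so g ud recovers it exactly.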
{
"heading": "2.2.3 EXPERIENCE SIMULATION",
"text": "Experience is simulated by minimizing a squared sum of BEs over finitely many points in the state-space domain, since computing the supremum in Êθ̂ is not tractable. The details of this approximation are given in Appendix A.2."
},
{
"heading": "2.2.4 STABILITY AND ROBUSTNESS ANALYSIS",
"text": "To perform the stability analysis, we use the non-autonomous form of the value function (Kamalapurkar et al., 2015), V∗t : Rn × R → R, defined as V∗t(e, t) = V∗([eᵀ, xdᵀ(t)]ᵀ) for all e ∈ Rn and t ∈ R, which is positive definite and decrescent. Then V∗t(0, t) = 0 for all t ∈ R, and there exist class K functions v : R → R and v̄ : R → R such that v(‖e‖) ≤ V∗t(e, t) ≤ v̄(‖e‖) for all e ∈ Rn and all t ∈ R. The augmented state Z ∈ R2n+2L+n(p+1) is defined as\nZ = [eᵀ, W̃cᵀ, W̃aᵀ, x̃ᵀ, (vec(θ̃))ᵀ]ᵀ (15)\nand a candidate Lyapunov function is defined as\nVL(Z, t) = V∗t(e, t) + (1/2)W̃cᵀΓ⁻¹W̃c + (1/2)W̃aᵀW̃a + (1/2)x̃ᵀx̃ + (1/2)tr(θ̃ᵀΓθ⁻¹θ̃) (16)\nwhere vec(·) denotes the vectorization operator. From the weight update in Appendix A.2 we get positive constants γ, γ̄ ∈ R such that γ ≤ ‖Γ⁻¹(t)‖ ≤ γ̄ for all t ∈ R. Using the bounds on Γ and V∗t and the fact that tr(θ̃ᵀΓθ⁻¹θ̃) = (vec(θ̃))ᵀ(Γθ⁻¹ ⊗ Ip+1)(vec(θ̃)), the candidate Lyapunov function can be bounded as\nvl(‖Z‖) ≤ VL(Z, t) ≤ v̄l(‖Z‖) (17)\nfor all Z ∈ R2n+2L+n(p+1) and all t ∈ R, where vl : R → R and v̄l : R → R are class K functions. Now, using (1) and the fact that V̇∗t(e(t), t) = V̇∗(ζ(t)) for all t ∈ R, the time derivative of the candidate Lyapunov function is given by\nV̇L = ∇ζV∗(F + Gµ∗) − W̃cᵀΓ⁻¹ ˙Ŵc − (1/2)W̃cᵀΓ⁻¹Γ̇Γ⁻¹W̃c − W̃aᵀ ˙Ŵa + V̇0 + ∇ζV∗Gµ − ∇ζV∗Gµ∗ (18)\nUnder sufficient gain conditions (Kamalapurkar et al., 2014), using (9), (10) and (13) and the update laws for Ŵc, Γ and Ŵa, the time derivative of the candidate Lyapunov function can be bounded as\nV̇L ≤ −vl(‖Z‖), for all ‖Z‖ ≥ vl⁻¹(ι), Z ∈ χ (19)\nwhere ι is a positive constant and χ ⊂ R2n+2L+n(p+1) is a compact set. Considering (17) and (19), Theorem 4.18 in (Khalil, 2002) can be used to establish that every trajectory Z(t) satisfying ‖Z(t0)‖ ≤ v̄l⁻¹(vl(ρ)), where ρ is a positive constant, is bounded for all t ∈ R and satisfies lim supt→∞ ‖Z(t)‖ ≤ vl⁻¹(v̄l(vl⁻¹(ι))). 
This analysis addresses the stability of the closed-loop system.\nThe robustness criterion requires the algorithm to satisfy the following inequality (Gao et al., 2014) in the presence of external disturbances, with a pre-specified performance index γ known as the H-infinity (H∞) performance index:\n∫₀ᵗ ‖y(t)‖² dt < γ² ∫₀ᵗ ‖w(t)‖² dt (20)\nwhere y(t) is the output of the system, w(t) accounts for the modeling errors, parameter uncertainties and external disturbances, and γ bounds the ratio of the output energy to the disturbance energy in the system.\nUsing the time derivative of the candidate Lyapunov function given in (18), Gao et al. (2014) have shown that, when the stability conditions above are satisfied,\n0 < VL(T) = ∫₀ᵀ V̇L(t) dt ≤ −∫₀ᵀ yᵀ(t)y(t) dt + γ² ∫₀ᵀ wᵀ(t)w(t) dt (21)\nThus, the performance inequality ∫₀ᵗ ‖y(t)‖² dt < γ² ∫₀ᵗ ‖w(t)‖² dt in terms of γ is satisfied."
},
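The performance index in (20) can be estimated empirically from logged trajectories. The sketch below computes the smallest γ satisfying the energy inequality over a recorded horizon; the signals in the usage check are synthetic, not from the paper's experiments.

```python
import numpy as np

def hinf_performance_index(y, w, dt):
    """Smallest gamma with  int ||y||^2 dt <= gamma^2 int ||w||^2 dt  (cf. eq. 20).

    y, w : arrays of shape (n_samples, dim) holding sampled output and
    disturbance signals; dt is the sampling period. Returns the square
    root of the output-to-disturbance energy ratio.
    """
    out_energy = np.sum(np.square(y)) * dt
    dist_energy = np.sum(np.square(w)) * dt
    return np.sqrt(out_energy / dist_energy)
```

A smaller value indicates better disturbance attenuation, which is the sense in which the reported indices 0.3 versus 0.45 in Section 3 favour the CL-based design.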
{
"heading": "3 SIMULATION RESULTS AND DISCUSSION",
"text": "Here we present simulation results to demonstrate the performance of the proposed method on the fuel management system of a hybrid electric vehicle. The proposed concurrent learning based RL optimization architecture is shown in Figure 1.\nIn this architecture, the simulated state-action-derivative triplets perform concurrent learning to approximate the value function weight estimates that minimize the Bellman error (BE). The history stack stores the evaluations of the Bellman error, carried out by a dynamic system identifier, as gained experience so that they can be iteratively reused to reduce the computational burden.\nA simple two-dimensional model of the fuel management system is considered for the simulation in order to provide a generalized solution that can be extended to higher-dimensional systems.\nWe consider a two-dimensional nonlinear model given by\nf = [x1, x2, 0, 0; 0, 0, x1, x2(1 − (cos(2x1 + 2))²)] [a, b, c, d]ᵀ, g = [0, cos(2x1 + 2)]ᵀ, w(t) = sin(t) (23)\nwhere a, b, c, d ∈ R are unknown parameters whose values are selected as a = −1, b = 1, c = −0.5, d = −0.5, x1 and x2 are the two states of the hybrid electric vehicle, given by the charge present in the battery and the amount of fuel in the car respectively, and w(t) = sin(t) is a sinusoidal signal used to model the external disturbance. The control objective is to minimize the cost function J(ζ, µ) = ∫₀ᵗ r(ζ(τ), µ(τ)) dτ, where the local cost r : R2n × Rm → R is given as r(ζ, µ) = Q(e) + µᵀRµ, R ∈ Rm×m is a positive definite symmetric matrix and Q : Rn → R is a continuous positive definite function, while following the desired trajectory x̄. We choose Q = I2×2 and R = 1. The optimal value function and optimal control for this system are V∗(x) = (1/2)x1² + x2² and u∗(x) = −cos(2x1 + 2)x2. The basis function σ : R2 → R3 for the value function approximation is σ = [x1², x1x2, x2²]. 
The ideal weights are W = [0.5, 0, 1]ᵀ. The initial values of the policy and value function weight estimates are Ŵc = Ŵa = [1, 1, 1]ᵀ, the least-squares gain is Γ(0) = 100 I3×3, and the initial system state is x(0) = [−1, −1]ᵀ. The state estimate x̂ and the weight estimate θ̂ are initialized to 0 and 1 respectively, while the history stack for CL is updated online. Figure 2 and Figure 3 show the state trajectories obtained by the traditional RL methods and by the CL-based RL optimization technique respectively in the presence of disturbances. The settling time of the trajectories obtained by the proposed method is significantly smaller (by almost 40 percent) than that of the conventional RL strategies, leading to a saving in fuel consumption of about 40-45 percent. Figure 4 shows the corresponding control inputs, whereas Figure 5 and Figure 6 indicate the convergence of the NN weights to their optimal values. The H∞ performance index in Figure 7 shows a value of 0.3 for the CL-based RL method compared to 0.45 for the traditional RL-based control design, which clearly establishes the robustness of our proposed design."
},
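The two-state benchmark of Section 3 can be reproduced with a short Euler simulation. A minimal sketch, assuming the known optimal feedback u∗(x) = −cos(2x1 + 2)x2 and injecting the disturbance through the control channel (one plausible choice; the paper does not specify where w(t) enters):

```python
import numpy as np

def simulate(u_policy, x0=(-1.0, -1.0), a=-1.0, b=1.0, c=-0.5, d=-0.5,
             T=10.0, dt=1e-3, disturbed=True):
    """Euler simulation of the two-state model (eq. 23) of Section 3."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for k in range(int(round(T / dt))):
        x1, x2 = x
        f = np.array([a * x1 + b * x2,
                      c * x1 + d * x2 * (1.0 - np.cos(2.0 * x1 + 2.0) ** 2)])
        g = np.array([0.0, np.cos(2.0 * x1 + 2.0)])
        w = np.sin(k * dt) if disturbed else 0.0   # disturbance w(t) = sin(t)
        x = x + (f + g * (u_policy(x) + w)) * dt
        traj.append(x.copy())
    return np.array(traj)

def u_star(x):
    # known optimal feedback for this benchmark system
    return -np.cos(2.0 * x[0] + 2.0) * x[1]
```

Without disturbance the closed loop drives the state to the origin, and with the bounded sinusoidal disturbance the trajectory remains bounded, matching the qualitative behaviour reported in Figures 2-3.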
{
"heading": "4 CONCLUSION",
"text": "In this paper, we have proposed a robust concurrent learning based deep RL optimization strategy for hybrid electric vehicles. The uniqueness of this method lies in the use of a concurrent learning based RL optimization strategy that significantly reduces the computational complexity compared to the traditional RL approaches used for the fuel management systems described in the literature. Also, the use of the H-infinity (H∞) performance index for RL optimization, for the first time, addresses the robustness problems that most fuel optimization methods suffer from. The simulation results validate the efficacy of the method over the conventional PID and MPC techniques as well as the traditional RL based optimization techniques. Future work will generalize the approach to large-scale partially observed uncertain systems and will also incorporate the movement of neighbouring RL agents."
},
{
"heading": "A APPENDIX",
"text": "A.1 THE GRADIENT DESCENT ALGORITHM\nThe Gradient Descent Algorithm is as follows:\nAlgorithm: Gradient Descent\nInput: design parameters U(0) = u0t, α, h, ε ∈ R\nOutput: optimal control sequence {ūt}\n1. n ← 0, ∇U Jd(x0, U(0)) ← ∞\n2. while ‖∇U Jd(x0, U(n))‖ ≥ ε do\n3. Evaluate the cost function with control U(n)\n4. Perturb each control variable ui(n) by h, i = 0, ..., t, and calculate the gradient vector ∇U Jd(x0, U(n)) using (5) and (6)\n5. Update the control policy: U(n+1) ← U(n) − α∇U Jd(x0, U(n))\n6. n ← n + 1\n7. end\n8. {ūt} ← U(n)"
},
{
"heading": "A.2 EXPERIENCE SIMULATION",
"text": "Assumption 5: (Kamalapurkar et al., 2014) There exists a finite set of points {ζi ∈ C | i = 1, ..., N} and a constant c ∈ R such that 0 < c = (1/N) inf over t ∈ R≥t0 of λmin{Σi=1..N ωiωiᵀ/ρi}, where ρi = 1 + νωiᵀΓωi ∈ R and ωi = ∇ζσ(ζi)(Fθ(ζi, θ̂) + F1(ζi) + G(ζi)µ̂(ζi, Ŵa)).\nUsing Assumption 5, simulation of experience is implemented by the weight update laws\n˙Ŵc = −ηc1 Γ (ω/ρ) δ̂t − (ηc2/N) Γ Σi=1..N (ωi/ρi) δ̂ti (24)\nΓ̇ = (βΓ − ηc1 Γ (ωωᵀ/ρ²) Γ) 1{‖Γ‖≤Γ̄}, ‖Γ(t0)‖ ≤ Γ̄ (25)\n˙Ŵa = −ηa1(Ŵa − Ŵc) − ηa2Ŵa + (ηc1 Gσᵀ Ŵa ωᵀ/(4ρ) + Σi=1..N ηc2 Gσiᵀ Ŵa ωiᵀ/(4Nρi)) Ŵc (26)\nwhere ω = ∇ζσ(ζ)(Fθ(ζ, θ̂) + F1(ζ) + G(ζ)µ̂(ζ, Ŵa)), Γ ∈ RL×L is the least-squares gain matrix, Γ̄ ∈ R denotes a positive saturation constant, β ∈ R is a constant forgetting factor, ηc1, ηc2, ηa1, ηa2 ∈ R are constant positive adaptation gains, 1{·} denotes the indicator function of the set {·}, Gσ = ∇ζσ(ζ)G(ζ)R⁻¹Gᵀ(ζ)∇ζσᵀ(ζ), and ρ = 1 + νωᵀΓω, where ν ∈ R is a positive normalization constant. In the above weight update laws, for any function ξ(ζ, ·) the notation ξi is defined as ξi = ξ(ζi, ·), and the instantaneous BEs δ̂t and δ̂ti are given as δ̂t = δ̂(ζ, Ŵc, Ŵa, θ̂) and δ̂ti = δ̂(ζi, Ŵc, Ŵa, θ̂)."
}
]
 2020
 A ROBUST FUEL OPTIMIZATION STRATEGY FOR HYBRID ELECTRIC VEHICLES: A DEEP REINFORCEMENT LEARNING BASED CONTINUOUS TIME DESIGN APPROACH

SP:43e525fb3fa611df7fd44bd3bc9843e57b154c66
 [
"This paper proposes 3 deep generative models based on VAEs (with different encoding schemes for RNA secondary structure) for the generation of RNA secondary structures. They test each model on 3 benchmark tasks: unsupervised generation, semi-supervised learning and targeted generation. This paper has many interesting contributions — a comparison of VAE models that use different RNA secondary structure encoding schemes, including traditional dot-bracket notation and a more complex hierarchical encoding, and they also introduce various decoding schemes to encourage valid secondary structures. "
]
 Our work is concerned with the generation and targeted design of RNA, a type of genetic macromolecule that can adopt complex structures which influence their cellular activities and functions. The design of large scale and complex biological structures spurs dedicated graphbased deep generative modeling techniques, which represents a key but underappreciated aspect of computational drug discovery. In this work, we investigate the principles behind representing and generating different RNA structural modalities, and propose a flexible framework to jointly embed and generate these molecular structures along with their sequence in a meaningful latent space. Equipped with a deep understanding of RNA molecular structures, our most sophisticated encoding and decoding methods operate on the molecular graph as well as the junction tree hierarchy, integrating strong inductive bias about RNA structural regularity and folding mechanism such that high structural validity, stability and diversity of generated RNAs are achieved. Also, we seek to adequately organize the latent space of RNA molecular embeddings with regard to the interaction with proteins, and targeted optimization is used to navigate in this latent space to search for desired novel RNA molecules.
 [
{
"affiliations": [],
"name": "Zichao Yan"
},
{
"affiliations": [],
"name": "William L. Hamilton"
}
]
 [
{
"authors": [
"Bronwen L Aken",
"Premanand Achuthan",
"Wasiu Akanni",
"M Ridwan Amode",
"Friederike Bernsdorff",
"Jyothish Bhai",
"Konstantinos Billis",
"Denise CarvalhoSilva",
"Carla Cummins",
"Peter Clapham"
],
"title": "Ensembl 2017",
"venue": "Nucleic Acids Research,",
"year": 2016
},
{
"authors": [
"Yuri Burda",
"Roger B. Grosse",
"Ruslan Salakhutdinov"
],
"title": "Importance Weighted Autoencoders",
"venue": "In International Conference on Learning Representations (ICLR),",
"year": 2016
},
{
"authors": [
"Tian Qi Chen",
"Yulia Rubanova",
"Jesse Bettencourt",
"David Duvenaud"
],
"title": "Neural Ordinary Differential Equations",
"venue": "In Advances in Neural Information Processing Systems (NeurIPS),",
"year": 2018
},
{
"authors": [
"Xi Chen",
"Diederik P. Kingma",
"Tim Salimans",
"Yan Duan",
"Prafulla Dhariwal",
"John Schulman",
"Ilya Sutskever",
"Pieter Abbeel"
],
"title": "Variational Lossy Autoencoder",
"venue": "In International Conference on Learning Representations (ICLR),",
"year": 2017
},
{
"authors": [
"Xinshi Chen",
"Yu Li",
"Ramzan Umarov",
"Xin Gao",
"Le Song"
],
"title": "RNA secondary structure prediction by learning unrolled algorithms",
"venue": "In International Conference on Learning Representations (ICLR),",
"year": 2020
},
{
"authors": [
"Kyunghyun Cho",
"Bart van Merrienboer",
"Çaglar Gülçehre",
"Dzmitry Bahdanau",
"Fethi Bougares",
"Holger Schwenk",
"Yoshua Bengio"
],
"title": "Learning phrase representations using RNN encoderdecoder for statistical machine translation",
"venue": "In Conference on Empirical Methods in Natural Language Processing (EMNLP),",
"year": 2014
},
{
"authors": [
"Alexander Churkin",
"Matan Drory Retwitzer",
"Vladimir Reinharz",
"Yann Ponty",
"Jérôme Waldispühl",
"Danny Barash"
],
"title": "Design of RNAs: comparing programs for inverse RNA folding",
"venue": "Briefings in Bioinformatics, 19(2):350–358,",
"year": 2017
},
{
"authors": [
"K.B. Cook",
"S. Vembu",
"K.C.H. Ha",
"H. Zheng",
"K.U. Laverty",
"T.R. Hughes",
"D. Ray",
"Q.D. Morris"
],
"title": "RNAcompeteS: Combined RNA sequence/structure preferences for RNA binding proteins derived from a singlestep in vitro",
"venue": "selection. Methods,",
"year": 2017
},
{
"authors": [
"Robin D. Dowell",
"Sean R. Eddy"
],
"title": "Evaluation of several lightweight stochastic contextfree grammars for RNA secondary structure prediction",
"venue": "BMC Bioinformatics,",
"year": 2004
},
{
"authors": [
"David Duvenaud",
"Dougal Maclaurin",
"Jorge AguileraIparraguirre",
"Rafael GómezBombarelli",
"Timothy Hirzel",
"Alán AspuruGuzik",
"Ryan P. Adams"
],
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"venue": "In Advances in Neural Information Processing Systems (NeurIPS),",
"year": 2015
},
{
"authors": [
"Sean R. Eddy",
"Richard Durbin"
],
"title": "RNA sequence analysis using covariance models",
"venue": "Nucleic Acids Research, 22(11):2079–2088,",
"year": 1994
},
{
"authors": [
"Ahmed Elnaggar",
"Michael Heinzinger",
"Christian Dallago",
"Ghalia Rehawi",
"Yu Wang",
"Llion Jones",
"Tom Gibbs",
"Tamas Feher",
"Christoph Angerer",
"Martin Steinegger",
"DEBSINDHU BHOWMIK",
"Burkhard Rost"
],
"title": "ProtTrans: Towards Cracking the Language of Life’s Code Through SelfSupervised Deep Learning and High Performance Computing",
"venue": "bioRxiv,",
"year": 2020
},
{
"authors": [
"Justin Gilmer",
"Samuel S. Schoenholz",
"Patrick F. Riley",
"Oriol Vinyals",
"George E. Dahl"
],
"title": "Neural Message Passing for Quantum Chemistry",
"venue": "In International Conference on Machine Learning (ICML),",
"year": 2017
},
{
"authors": [
"Rafael GómezBombarelli",
"Jennifer N Wei",
"David Duvenaud",
"José Miguel HernándezLobato",
"Benjamı́n SánchezLengeling",
"Dennis Sheberla",
"Jorge AguileraIparraguirre",
"Timothy D Hirzel",
"Ryan P Adams",
"Alán AspuruGuzik"
],
"title": "Automatic chemical design using a datadriven continuous representation of molecules",
"venue": "ACS central science,",
"year": 2018
},
{
"authors": [
"Will Grathwohl",
"Ricky T.Q. Chen",
"Jesse Bettencourt",
"Ilya Sutskever",
"David Duvenaud"
],
"title": "FFJORD: freeform continuous dynamics for scalable reversible generative models",
"venue": "In International Conference on Learning Representations (ICLR),",
"year": 2019
},
{
"authors": [
"Junxian He",
"Daniel Spokoyny",
"Graham Neubig",
"Taylor BergKirkpatrick"
],
"title": "Lagging Inference Networks and Posterior Collapse in Variational Autoencoders",
"venue": "In International Conference on Learning Representations (ICLR),",
"year": 2019
},
{
"authors": [
"S. Hochreiter",
"J. Schmidhuber"
],
"title": "Long shortterm memory",
"venue": "Neural Computing,",
"year": 1997
},
{
"authors": [
"Wengong Jin",
"Regina Barzilay",
"Tommi S. Jaakkola"
],
"title": "Junction Tree Variational Autoencoder for Molecular Graph Generation",
"venue": "In International Conference on Machine Learning (ICML),",
"year": 2018
},
{
"authors": [
"Ioanna Kalvari",
"Joanna Argasinska",
"Natalia QuinonesOlvera",
"Eric P Nawrocki",
"Elena Rivas",
"Sean R Eddy",
"Alex Bateman",
"Robert D Finn",
"Anton I Petrov"
],
"title": "Rfam 13.0: shifting to a genomecentric resource for noncoding RNA families",
"venue": "Nucleic Acids Research, 46(D1):D335–D342,",
"year": 2017
},
{
"authors": [
"Diederik P. Kingma",
"Max Welling"
],
"title": "AutoEncoding Variational Bayes",
"venue": "In International Conference on Learning Representations (ICLR),",
"year": 2014
},
{
"authors": [
"Durk P Kingma",
"Tim Salimans",
"Rafal Jozefowicz",
"Xi Chen",
"Ilya Sutskever",
"Max Welling"
],
"title": "Improved variational inference with inverse autoregressive flow",
"venue": "In Advances in Neural Information Processing Systems (NeurIPS),",
"year": 2016
},
{
"authors": [
"Matt J. Kusner",
"Brooks Paige",
"José Miguel HernándezLobato"
],
"title": "Grammar Variational Autoencoder",
"venue": "In International Conference on Machine Learning (ICML),",
"year": 2017
},
{
"authors": [
"Yujia Li",
"Daniel Tarlow",
"Marc Brockschmidt",
"Richard S. Zemel"
],
"title": "Gated Graph Sequence Neural Networks",
"venue": "In International Conference on Learning Representations (ICLR),",
"year": 2016
},
{
"authors": [
"Qi Liu",
"Miltiadis Allamanis",
"Marc Brockschmidt",
"Alexander L. Gaunt"
],
"title": "Constrained Graph Variational Autoencoders for Molecule Design",
"venue": "In Advances in Neural Information Processing Systems (NeurIPS),",
"year": 2018
},
{
"authors": [
"R. Lorenz",
"S.H. Bernhart",
"C. Honer Zu Siederdissen",
"H. Tafer",
"C. Flamm",
"P.F. Stadler",
"I.L. Hofacker"
],
"title": "ViennaRNA Package 2.0",
"venue": "Algorithms for Molecular Biology,",
"year": 2011
},
{
"authors": [
"David H. Mathews",
"Matthew D. Disney",
"Jessica L. Childs",
"Susan J. Schroeder",
"Michael Zuker",
"Douglas H. Turner"
],
"title": "Incorporating chemical modification constraints into a dynamic programming algorithm for prediction of RNA secondary structure",
"venue": "Proceedings of the National Academy of Sciences of the United States of America,",
"year": 2004
},
{
"authors": [
"Eric P. Nawrocki",
"Sean R. Eddy"
],
"title": "Infernal 1.1: 100fold faster RNA homology searches",
"venue": "Bioinformatics, 29(22):2933–2935,",
"year": 2013
},
{
"authors": [
"Carlos Oliver",
"Vincent Mallet",
"Roman Sarrazin Gendron",
"Vladimir Reinharz",
"William L Hamilton",
"Nicolas Moitessier",
"Jérôme Waldispühl"
],
"title": "Augmented base pairing networks encode RNAsmall molecule binding preferences",
"venue": "Nucleic Acids Research, 48(14):7690–7699,",
"year": 2020
},
{
"authors": [
"Norbert Pardi",
"Michael J. Hogan",
"Frederick W. Porter",
"Drew Weissman"
],
"title": "mRNA vaccines — a new era in vaccinology",
"venue": "Nature Reviews Drug Discovery,",
"year": 2018
},
{
"authors": [
"Lorena G. Parlea",
"Blake A. Sweeney",
"Maryam HosseiniAsanjan",
"Craig L. Zirbel",
"Neocles B. Leontis"
],
"title": "The RNA 3D Motif Atlas: Computational methods for extraction, organization and evaluation of RNA motifs",
"venue": null,
"year": 2016
},
{
"authors": [
"D. Ray",
"H. Kazan",
"E.T. Chan",
"L. Pena Castillo",
"S. Chaudhry",
"S. Talukder",
"B.J. Blencowe",
"Q. Morris",
"T.R. Hughes"
],
"title": "Rapid and systematic analysis of the RNA recognition specificities of RNAbinding proteins",
"venue": "Nature Biotechnology,",
"year": 2009
},
{
"authors": [
"Sashank J. Reddi",
"Satyen Kale",
"Sanjiv Kumar"
],
"title": "On the convergence of adam and beyond",
"venue": "In International Conference on Learning Representations (ICLR),",
"year": 2018
},
{
"authors": [
"Vladimir Reinharz",
"Antoine Soulé",
"Eric Westhof",
"Jérôme Waldispühl",
"Alain Denise"
],
"title": "Mining for recurrent longrange interactions in RNA structures reveals embedded hierarchies in network families",
"venue": "Nucleic Acids Research, 46(8):3841–3851,",
"year": 2018
},
{
"authors": [
"Danilo Jimenez Rezende",
"Shakir Mohamed"
],
"title": "Variational Inference with Normalizing Flows",
"venue": "In International Conference on Machine Learning (ICML),",
"year": 2015
},
{
"authors": [
"Alexander Rives",
"Joshua Meier",
"Tom Sercu",
"Siddharth Goyal",
"Zeming Lin",
"Demi Guo",
"Myle Ott",
"C. Lawrence Zitnick",
"Jerry Ma",
"Rob Fergus"
],
"title": "Biological Structure and Function Emerge from Scaling Unsupervised Learning to 250 Million Protein Sequences",
"venue": "bioRxiv,",
"year": 2019
},
{
"authors": [
"Frederic Runge",
"Danny Stoll",
"Stefan Falkner",
"Frank Hutter"
],
"title": "Learning to Design RNA",
"venue": "In International Conference on Learning Representations (ICLR),",
"year": 2019
},
{
"authors": [
"Roman SarrazinGendron",
"HuaTing Yao",
"Vladimir Reinharz",
"Carlos G. Oliver",
"Yann Ponty",
"Jérôme Waldispühl"
],
"title": "Stochastic Sampling of Structural Contexts Improves the Scalability and Accuracy of RNA 3D Module Identification",
"venue": "In Russell Schwartz (ed.), Research in Computational Molecular Biology,",
"year": 2020
},
{
"authors": [
"Thomas Schlake",
"Andreas Thess",
"Mariola FotinMleczek",
"KarlJosef Kallen"
],
"title": "Developing mRNAvaccine technologies",
"venue": "RNA Biology,",
"year": 2012
},
{
"authors": [
"Michael Sejr Schlichtkrull",
"Thomas N. Kipf",
"Peter Bloem",
"Rianne van den Berg",
"Ivan Titov",
"Max Welling"
],
"title": "Modeling Relational Data with Graph Convolutional Networks",
"venue": "In The Semantic Web 15th International Conference,",
"year": 2018
},
{
"authors": [
"Jaswinder Singh",
"Jack Hanson",
"Kuldip Paliwal",
"Yaoqi Zhou"
],
"title": "RNA secondary structure prediction using an ensemble of twodimensional deep neural networks and transfer learning",
"venue": "Nature Communications,",
"year": 2019
},
{
"authors": [
"Richard Stefl",
"Lenka Skrisovska",
"Frédéric H.T. Allain"
],
"title": "RNA sequence and shapedependent recognition by proteins in the ribonucleoprotein particle",
"venue": "EMBO reports,",
"year": 2005
},
{
"authors": [
"Teague Sterling",
"John J. Irwin"
],
"title": "ZINC 15 – Ligand Discovery for Everyone",
"venue": "Journal of Chemical Information and Modeling,",
"year": 2015
},
{
"authors": [
"Ashish Vaswani",
"Noam Shazeer",
"Niki Parmar",
"Jakob Uszkoreit",
"Llion Jones",
"Aidan N. Gomez",
"Lukasz Kaiser",
"Illia Polosukhin"
],
"title": "Attention is all you need",
"venue": "In Advances in Neural Information Processing Systems (NeurIPS),",
"year": 2017
},
{
"authors": [
"Petar Velickovic",
"Guillem Cucurull",
"Arantxa Casanova",
"Adriana Romero",
"Pietro Liò",
"Yoshua Bengio"
],
"title": "Graph Attention Networks",
"venue": "In International Conference on Learning Representations (ICLR),",
"year": 2018
},
{
"authors": [
"Zichao Yan",
"William L. Hamilton",
"Mathieu Blanchette"
],
"title": "Graph neural representational learning of RNA secondary structures for predicting RNAprotein interactions",
"venue": "Bioinformatics, 36 (Supplement 1):i276–i284,",
"year": 2020
},
{
"authors": [
"Guandao Yang",
"Xun Huang",
"Zekun Hao",
"MingYu Liu",
"Serge J. Belongie",
"Bharath Hariharan"
],
"title": "PointFlow: 3D Point Cloud Generation With Continuous Normalizing Flows",
"venue": "In International Conference on Computer Vision (ICCV),",
"year": 2019
},
{
"authors": [
"Jiaxuan You",
"Bowen Liu",
"Zhitao Ying",
"Vijay S. Pande",
"Jure Leskovec"
],
"title": "Graph Convolutional Policy Network for GoalDirected Molecular Graph Generation",
"venue": "In Advances in Neural Information Processing Systems (NeurIPS),",
"year": 2018
}
]
 [
{
"heading": "1 INTRODUCTION",
"text": "There is an increasing interest in developing deep generative models for biochemical data, especially in the context of generating druglike molecules. Learning generative models of biochemical molecules can facilitate the development and discovery of novel treatments for various diseases, reducing the lead time for discovering promising new therapies and potentially translating in reduced costs for drug development (Stokes et al., 2020). Indeed, the study of generative models for molecules has become a rich and active subfield within machine learning, with standard benchmarks (Sterling & Irwin, 2015), a set of wellknown baseline approaches (GómezBombarelli et al., 2018; Kusner et al., 2017; Liu et al., 2018; Jin et al., 2018), and highprofile cases of realworld impact 1.\nPrior work in this space has focused primarily on the generation of small molecules (with less than 100 atoms), leaving the development of generative models for larger and more complicated biologics and biosimilar drugs (e.g., RNA and protein peptides) an open area for research. Developing generative models for larger biochemicals is critical in order to expand the frontiers of automated treatment design. More generally, developing effective representation learning for such complex biochemicals will allow machine learning systems to integrate knowledge and interactions involving these biologicallyrich structures.\nIn this work, we take a first step towards the development of deep generative models for complex biomolecules, focusing on the representation and generation of RNA structures. RNA plays a crucial\n1e.g. 
LambdaZero project for exascale search of druglike molecules.\nrole in protein transcription and various regulatory processes within cells which can be influenced by its structure (Crick, 1970; Stefl et al., 2005), and RNAbased therapies are an increasingly active area of research (Pardi et al., 2018; Schlake et al., 2012), making it a natural focus for the development of deep generative models. The key challenge in generating RNA molecules—compared to the generation of small molecules—is that RNA involves a hierarchical, multiscale structure, including a primary sequential structure based on the sequence of nucleic acids as well as more complex secondary and tertiary structures based on the way that the RNA strand folds onto itself. An effective generative model for RNA must be able to generate sequences that give rise to these more complex emergent structures.\nThere have been prior works on optimizing or designing RNA sequences—using reinforcement learning or blackbox optimization—to generate particular RNA secondary structures (Runge et al., 2019; Churkin et al., 2017). However, these prior works generally focus on optimizing sequences to conform to a specific secondary structure. In contrast, our goal is to define a generative model, which can facilitate the sampling and generation of diverse RNA molecules with meaningful secondary structures, while also providing a novel avenue for targeted RNA design via search over a tractable latent space.\nKey contributions. We propose a series of benchmark tasks and deep generative models for the task of RNA generation, with the goal of facilitating future work on this important and challenging problem. We propose three interrelated benchmark tasks for RNA representation and generation:\n1. Unsupervised generation: Generating stable, valid, and diverse RNAs that exhibit complex secondary structures.\n2. 
Semisupervised learning: Learning latent representations of RNA structure that correlate with known RNA functional properties.\n3. Targeted generation: Generating RNAs that exhibit particular functional properties.\nThese three tasks build upon each other, with the first task only requiring the generation of stable and valid molecules, while the latter two tasks involve representing and generating RNAs that exhibit particular properties. In addition to proposing these novel benchmarks for the field, we introduce and evaluate three generative models for RNA. All three models build upon variational autoencoders (VAEs) (Kingma & Welling, 2014) augmented with normalizing flows (Rezende & Mohamed, 2015; Kingma et al., 2016), and they differ in how they represent the RNA structure. To help readers better understand RNA structures and properties, a selfcontained explanation is provided in appendix B.\nThe simplest model (termed LSTMVAE) learns using a stringbased representation of RNA structure. The second model (termed GraphVAE) leverages a graphbased representation and graph neural network (GNN) encoder approach (Gilmer et al., 2017). Finally, the most sophisticated model (termed HierVAE) introduces and leverages a novel hierarchical decomposition of the RNA structure. Extensive experiments on our newly proposed benchmarks highlight how the hierarchical approach allows more effective representation and generation of complex RNA structures, while also highlighting important challenges for future work in the area."
},
{
"heading": "2 TASK DESCRIPTION",
"text": "Given a dataset of RNA molecules, i.e. sequences of nucleotides and corresponding secondary structures, our goals are to: (a) learn to generate structurally stable, diverse, and valid RNA molecules that reflect the distribution in this training dataset; (b) learn latent representations that reflect the functional properties of RNA. A key factor in both these representation and generation processes is that we seek to jointly represent and generate both the primary sequence structure as well as the secondary structure conformation. Together, these two goals lay the foundations for generating novel RNAs that satisfy certain functional properties. To meet these goals, we create two types of benchmark datasets, each one focusing on one aspect of the above mentioned goals:\nUnlabeled and variablelength RNA. The first dataset contains unlabeled RNA with moderate and highlyvariable length (32512 nts), obtained from the human transcriptome (Aken et al., 2016) and through which we focus on the generation aspect of structured RNA and evaluate the validity, stability and diversity of generated RNA molecules. In particular, our goal with this dataset is to jointly generate RNA sequences and secondary structures that are biochemically feasible (i.e., valid), have\nlow free energy (i.e., stable), and are distinct from the training data (i.e., diverse). We will give an extended assessment of the generation aspect under different circumstances, e.g., when constraining the generation procedures with explicit rules.\nLabeled RNA. The second dataset is pulled and processed from a previous study on in vitro RNAprotein interaction, which features labeled RNAs with shorter and uniform length (40 nts) (Cook et al., 2017). With this dataset, our objective is slightly expanded (to include obj. a), so that the latent space is adequately organized and reflective of the interaction with proteins. 
Therefore, key assessment for the latent space includes AUROC for the classification of protein binding, which is crucial for the design of desired novel RNA molecules.\nEssentially, this creates slight variations in the task formulation, with the first dataset suited to unsupervised learning of a generative model, while the second datasets involves additional supervision (e.g., for a semisupervised model or targeted generation). Our specific modeling choices, to be introduced in section 3, are invariant to different task formulations, and flexible enough to handle different representations of RNA secondary structures. We refer readers to appendix C for detailed explanation for the dataset and evaluation metrics on the generated molecules and latent embeddings."
},
{
"heading": "3 METHODS",
"text": "In this section, we introduce three different generative models for RNA. All three models are based upon the variational autoencoder (VAE) framework, involving three key components:\n1. A probabilistic encoder network qφ(zx), which generates a distribution over latent states given an input representation of an RNA. We experiment with three different types of input encodings for RNA sequence and secondary structures (see Figure S1: a dotbracket annotated string, a graph with adjacency matrix representing basepairings, and a graph augmented with a hierarchical junction tree annotation for the secondary structure.\n2. A probabilistic decoder network pθ(xz), which defines a joint distribution over RNA sequences and secondary structures, conditioned on a latent input. As with the encoder network, we design architectures based on a linearized string decoding and a graphbased hierarchical junctiontree decoding approach.\n3. A parameterized prior pψ(z), which defines a prior distribution over latent states and is learned based on a continuous normalizing flow (CNF) (Chen et al., 2018).\nFor all the approaches we propose, the model is optimized via stochastic gradient descent to minimize the evidence lower bound (ELBO): L = −Eqφ(zx)[pθ(xz)] + β KL(qφ(zx)pψ(z)) where β is a term to allow KLannealing over the strength of the prior regularization.\nIn the following sections, we explain our three different instantiations of the encoder (section 3.1), decoder (section 3.2), as well as our procedures to structurally constrain the decoding process using domain knowledge (section 3.3) and our procedures to avoid posterior collapse (section 3.4)."
},
{
"heading": "3.1 ENCODING RNA SECONDARY STRUCTURES",
"text": "The input to the encoder is a structured RNA molecule, with its sequence given by an ordered array of nucleotides x1 . . . xL, with xi ∈ {A,C,G,U}, where L is the length of the sequence, and its secondary structure, either rep\nresented as (1) a dotbracket string S = ẋ1 . . . ẋL with ẋi ∈ {., (, )}; (2) or as a graph G with two types of edges — covalent bonds along the RNA backbone, and hydrogen bonds between the base\npairs 2. We use xuv to denote edge features between nucleotides u and v; (3) or as a hypergraph T — a depthfirst ordered array of subgraphs Ĝ1 . . . ĜD with L(Ĝi) ∈ {S,H, I,M} indicating the subgraph label, and I(Ĝi) = {jj ∈ {1 . . . L}} indicating the assignment of nucleotides to each subgraph.\nEncoding RNA secondary structure as sequence. First, we obtain a joint encoding over the nucleotide and the dotbracket annotation, using the joint sequencestructure vocabulary {A,C,G,U} × {., (, )}. Then, these onehot encodings are processed by a stacked bidirectional LSTM (Hochreiter & Schmidhuber, 1997), followed by a multihead selfattention module (Vaswani et al., 2017) to weigh different positions along the RNA backbone. A global maxpooling is used to aggregate the information into hS , and then we obtain mean µS and log variance log σS from hS through linear transformations, and draw latent encoding zS from N (µS , σS) using the reparameterization trick (Kingma & Welling, 2014).\nLearning graph representation of RNA secondary structure. To encode the graph view G of an RNA secondary structure, we pass rounds of neural messages along the RNA structure, which falls into the framework of Message Passing Neural Network (MPNN) as originally discussed in Gilmer et al. (2017) and similarly motivated by Jin et al. (2018).\nFor much longer RNAs, it is conceptually beneficial to pass more rounds of messages so that a nucleotide may receive information on its broader structural context. 
However, this may introduce undesired effects such as training instability and oversmoothing issues. Therefor , we combine our MPNN network with gating mechanism, which is collectively referred as the GMPNN:\nv̂t−1uv = σ(W g local[xu xuv] +W g msg ∑ w∈N(u) vt−1wu ) (1) vtuv = GRU(v̂ t−1 uv , v t−1 uv ) (2) where [. . .  . . . ] denotes concatenation, σ denotes the activation function and GRU indicates the gated recurrent unit (Cho et al., 2014). Then, after T iterations of message passing, the final nucleotide level embedding is given by: hu = σ(W g emb[xu  ∑ v∈N(u) v T vu]). Before pooling the nucleotide level embeddings into the graph level, we pass h1 . . . hL through a single bidirectional LSTM layer, obtaining ĥ1 . . . ĥL at each step, and hg = max({ĥii ∈ 1...L}). The latent encoding zG is similarly obtained from hG using the reparameterization trick.\nHierarchical encoding of the RNA hypergraph. To encode the junction tree T of RNA, we employ a type of GRU specifically suited to treelike structures, which has previously been applied in works such as GGNN (Li et al., 2016) and JTVAE (Jin et al., 2018). We refer to this tree encoding network as TGRU, and the format of its input is shown in Figure 1.\nOne major distinction between our RNA junction tree and the one used for chemical compounds (Jin et al., 2018) is that an RNA subgraph assumes more variable nucleotide composition such that it is impossible to enumerate based on the observed data. Therefore, we need to dynamically compute the features for each node in an RNA junction tree based on its contained nucleotides, in a hierarchical manner to leverage the nucleotide level embeddings learnt by GMPNN.\nConsidering a subgraph Ĝi in the junction tree T , we initialize its node feature with: xĜi = [L(Ĝi) maxu∈I(Ĝi) hu]. Notably, maxu∈Ĝi hu is a maxpooling over all nucleotides assigned to Ĝi, and nucleotide embedding hu comes from GMPNN. 
To compute and pass neural messages between adjacent subgraphs in the RNA junction tree T , we use the TGRU network in Eq.3\nvtĜi,Ĝj = TGRU(xĜi , {v t−1 Ĝk,Ĝi  Ĝk ∈ N(Ĝi)}) (3) hĜi = σ(W t emb[xĜi  ∑ Ĝ∈N(Ĝi) hĜ ]) (4) with details of TGRU provided in the appendix D, and compute the embeddings for subgraphs with Eq. 4. Further, we obtain a depthfirst traversal of the subgraph embeddings hĜ1 . . . hĜD′ which is also the order for hierarchical decoding to be discussed later. This ordered array of embeddings is processed by another bidirectional LSTM , and the final tree level representation hT is again given by the maxpooling over the biLSTM outputs. Likewise, latent encoding zT is obtained from hT .\n2We do not differentiate the number of hydrogen bonds, which can be different depending on the basepairs. For example, GC has three hydrogen bonds whereas AU only contains two."
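The update pattern of Eqs. (1) and (2) can be illustrated with a stripped-down scalar analogue. The learned weight matrices and the GRU update are replaced by a plain sigmoid here, so this shows only the structure of the per-edge message update, not the actual GMPNN.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gmpnn_round(messages, adj):
    # messages: dict mapping each directed edge (u, v) to a scalar message.
    # adj: dict mapping each node to the set of its neighbours.
    # Scalar analogue of Eq. (1): the new message on edge (u, v) combines
    # the old message (standing in for the local features) with the sum of
    # messages arriving at u, as in sum over w in N(u) of v_{wu}.
    # Eq. (2)'s gated GRU update is omitted for brevity.
    new = {}
    for (u, v), m in messages.items():
        incoming = sum(messages[(w, u)] for w in adj[u])
        new[(u, v)] = sigmoid(m + incoming)
    return new
```

Running several rounds on a small backbone graph propagates each nucleotide's context outward one hop per round, which is the behaviour the gating in the full GMPNN is meant to stabilize.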
},
{
"heading": "3.2 RNA MOLECULAR GENERATION",
"text": "Decoding linearized sequence and structure. In this setting, the decoder simply autoregressively decodes a token at each step, from the joint sequencestructure vocabulary mentioned before in section 3.1, plus one additional symbol to signal the end of decoding. To simplify the design choice, we use a singlelayered forwarddirectional LSTM, and its hidden state is initialized with the latent encoding z, which can be either zS , zG or zT .\nHierarchically decoding hypergraph and nucleotide segments. The input to this more sophisticated hierarchical decoder are latent encodings zG which contains order and basic connectivity information of the nucleotides, and zT which contains higher order information about the arrangements of nucleotide branches and their interactions. We give a concise description of the decoding procedures here, along with a detailed algorithm in appendix E. On a high level, we hierarchically decode the tree structure in a depthfirst manner, and autoregressively generate a nucleotide segment for each visited tree branch. For these purposes, we interleave three types of prediction (Figure 2).\nDenote the current tree node at decode step t and at the ith visit as Ĝt,i, whose features include (1) its node label L(Ĝt,i) and, (2) a summary over the already existing i − 1 nucleotide segments max{hl,ju u ∈ Ĝt,i and l < t and j < i}, with l denoting the nucleotide is decoded at step l, and j indicating the nucleotide belongs to the jth branch (this feature is simply zeros when i = 1). Then, its local feature xĜt,i is defined as the concatenation of (1) and (2).\nWe make use of a notion called node state: hĜt,i , which is obtained by: hĜt,i = TGRU(xĜt,i , {vĜ,Ĝt,i  Ĝ ∈ N(Ĝt,i)}). Note its similarity to Eq. 
3, and hĜt,i is used to make:\n• topological prediction in Figure 2 (A), to determine if the decoder should expand to a new tree node or backtrack to its parent node, based on MLPtopo(hĜt,i); • tree node prediction in Figure 2 (B), on condition that a new tree node is needed due to a possible topological expansion. This procedure determines the label of the new tree node from the set of {S,H, I,M}, based on MLPnode(hĜt,i); • nucleotide segment decoding in Figure 2 (C), using a singlelayered LSTM, whose initial hidden state is MLPdec([hĜt,i  zT  zG ]). The start token is the last nucleotide from the last segment.\nOur hierarchical decoder starts off by predicting the label of the root node using zT , followed by topological prediction on the root node and decoding the first nucleotide segment. The algorithm terminates upon revisiting the root node, topologically predicted to backtrack and finishing the last segment of the root node. The decoded junction tree naturally represents an RNA secondary structure that can be easily transformed to the dotbracket annotation, and the RNA sequence is simply recovered by connecting nucleotide segments along the depthfirst traversal of the tree nodes."
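The final conversion from a decoded tree to dot-bracket notation can be sketched with a toy recursion. The node format here (lengths of unpaired stretches interleaved with child branches, each branch enclosed by a single base pair) is a deliberate simplification of the actual junction-tree output, which carries full stem and loop content.

```python
def tree_to_dotbracket(node):
    # node = (segments, children): 'segments' lists the lengths of the
    # unpaired stretches interleaved with the child branches, so that
    # len(segments) == len(children) + 1. Each child branch is enclosed
    # by one base pair, a toy stand-in for real multi-pair stems.
    segments, children = node
    out = "." * segments[0]
    for child, seg in zip(children, segments[1:]):
        out += "(" + tree_to_dotbracket(child) + ")" + "." * seg
    return out
```

For example, a hairpin node nested under a root with two branches yields a two-stem multiloop pattern, mirroring how the depth-first traversal linearizes the tree.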
},
{
"heading": "3.3 STRUCTURALLY CONSTRAINED DECODING",
"text": "To better regulate the decoding process so that generated RNAs have valid secondary structures, a set of constraints can be added to the decoding procedures at the inference stage. Essentially, a valid RNA secondary structure needs to observe the following rules: (1) basepairing complementarity,\nwhich means only the canonical basepairs and Wobble basepairs are allowed, i.e. [AU], [GC] and [GU]; (2) hairpin loop should have a minimum of three unpaired nucleotides, i.e. for any two paired bases at position i and j, i− j > 3; (3) each nucleotide can only be paired once, and overlapping pairs are disallowed.\nWe will translate the above rules into specific and applicable constraints, depending on specific decoders. For the sake of space, we only give a broad remark and leave more details in the appendix.\nLinearized decoding constraints. Since the linearized decoder simply proceeds in an autoregressive fashion, constraints can be easily enforced in a way that at each step, a nucleotide with an appropriate structural annotation is sampled by making use of masks and renormalizing the probabilities. Likewise, a stop token can only sampled when all opening nucleotides have been closed. More details to follow in appendix F.\nHierarchical decoding constraints. The specific set of constraints for hierarchical decoding is discussed in appendix G. Overall, considering the different natures of the three associated types of prediction, each one should require a set of different strategies, which are once again applicable by adding proper masks before sampling. As shown in the algorithm in appendix E, the set of constraints are applied to line 13, 24 and 14 with marked asterisk."
},
{
"heading": "3.4 AVOIDING POSTERIOR COLLAPSE",
"text": "As discussed in a line of previous works, VAEs with strong autoregressive decoders are susceptible to posterior collapse, an issue where the decoder simply ignores the latent encoding of the encoder (He et al., 2019). Therefore, to avoid posterior collapsing, we make use of a carefully chosen KL annealing schedule during training to help the encoder adapt its information content in the latent encoding and in coordination with the decoder. This schedule is detailed in section 4. We also learn a parameterized prior as suggested in Chen et al. (2017), but using a CNF instead, following a similar implementation to Yang et al. (2019), with details given in appendix H.\nOur KL annealing schedule is chosen based on empirical observations, as to our knowledge, there has yet to exist any principled methods of selecting such schedule. We have used diagnostic metrics such as mutual information (He et al., 2019) and active units (Burda et al., 2016) along with a validation set to select a proper KL annealing schedule which is to be described later in section 4"
},
{
"heading": "4 RESULTS",
"text": "We consider three modes of evaluation: (1) unsupervised RNA generation; (2) generation using semisupervised VAE models and (3) targeted RNA design from an organized latent space. Results are presented below, and relevant hyperparameters can be found in Table S1.\nUnsupervised RNA generation. Here, we evaluate generated RNAs from models trained on the unlabeled RNA dataset for 20 epochs using a KL annealing schedule including 5 epochs of warmup,\nfollowed by gradually increasing the KL annealing term to 3e3 (for LSTMVAE and GraphVAE), or 2e3 (for HierVAE). The KL annealing schedule was chosen using a validation set of 1,280 RNAs.\nTable 1 compares the generation capability of different models, from the posterior as well as the prior distribution, and in scenarios such as applying structural constraints to the decoding process or not. It clearly shows that our most advanced model, HierVAE which employs a hierarchical view of the structure in its encoding/decoding aspects, achieves the best performance across different evaluation regimes, generating valid and stable RNAs even when the decoding processed is unconstrained. It is also observed that despite having structural constraints, the validity of our generated RNAs are always slightly below 100%. This can be explained by the threshold hyperparameter which sets the maximum number of steps for topological prediction as well as the maximal length of each nucleotide segment, as shown in Algorithm 1 in appendix E.\nTo further demonstrate the benefits of model training from structural constraints, we sample RNAs from the prior of an untrained HierVAE model. With structural constraints, the validity amounts to 66.34% with an extremely high free energy deviation of 22.613. Without structural constraints, the validity translates to a mere 9.37% and the model can only decode short single stranded RNAs as it lacks the knowledge of constructing more complex structures. 
This comparison illustrates that model training is essential for obtaining stable RNA folding.\nThe junction tree hierarchy of RNAs developed in our work shares certain modelling similarities with the probabilistic context free grammar (Dowell & Eddy, 2004) used by covariance models (CM) (Eddy & Durbin, 1994). Infernal (Nawrocki & Eddy, 2013) is one of the representative works based on CM, which is capable of sampling RNA secondary structures from a CM built around a consensus secondary structure for a conserved RNA family. However, due to the lack of homologous sequences in our dataset, Infernal is seriously limited and can only sample single stranded RNAs.\nFigure 3 illustrate RNA structures generated using HierVAE from a randomly chosen short path through the latent space. Notably, latent encoding provided by HierVAE translates smoothly in the RNA structure domain: nearby points in the latent space result in highly similar, yet different, structures. The generated structures are particularly stable for short and mediumsize RNAs, and slightly less so for longer RNAs with highly complex structures. A sidebyside comparison between generated RNA secondary structures and MFE structures in Figure S3 shows that generated structures can evolve smoothly in the latent space along with their corresponding MFE structures. We also visualize neighborhoods of a Cysteinecarrying transfer RNA and a 5S ribosomal RNA in figure S4 and S5.\nSupervised RNA generation. We then evaluate our generative approaches in a semisupervised setting using seven RBP binding data sets from RNAcompeteS. First, we compare the efficacy of different representational choices while excluding the generative components, i.e. 
we jointly train VAE encoders followed by simple MLP classifiers on top of the latent encodings for binary classification on RBP binding.\nTable S3 shows that incorporating RNA secondary structures is overall beneficial for the classification accuracy, except for RBMY where a model with access to RNA sequence alone (LSTMSeqOnly) has the best performance. Notably, different choices for representing RNA secondary structures do not lead to large variation in performance, with the exception of HuR and SLBP, where graph based representations have an advantage over the linearized structural representation. On the other hand, sequence based models often have comparable performance, possibly due to the capability of inferring RNA secondary structures from short RNA sequences. It is also worth exploring other invitro selection protocols such as HTRSELEX which can select RNAs with higher binding affinities than RNAcompeteS that only involves a single selection step.\nNext, we train full generative models (encoder, decoder, latent CNF and MLP embedding classifier), and show the results in Table 2. Since our strategy for targeted RNA design makes use of seed molecules in the latent space, we mainly sample RNAs from the posterior distribution of these semisupervised VAE models. Therefore, we select a KL annealing schedule that tends to retain more information in the latent encodings, i.e. setting maximum β to 5e4 and training 10 epochs.\nResults are promising in that classification AUROC measured by the heldout test set is comparable to the fully supervised classification models in Table S3, and much better compared to models only using fixed and pretrained VAE embeddings as shown in Table S2. Also, RNA structures generated from the posterior distribution, even under the setting of unconstrained and deterministic decoding, have high success rates, very stable conformation and good reconstruction accuracy.\nTargeted RNA design. 
We next studied the task of designing RNAs with high RBP binding affinity. Starting from the latent encodings of 10,000 randomly chosen RNA molecules that have negative labels in each RNAcompeteS test set, and use activation maximization to gradually alter the latent encodings so that the predicted binding probability from the embedding classifiers increases. These embedding classifiers have been trained jointly with the VAE models with accuracy reported earlier (Table 2). Then, we use separately trained full classifiers (also earlier shown in Table S3) as proxy of oracles for evaluating the “ground truth” probability of RBP binding. Table 3, report the\nsuccess rate (fraction of RNAs whose “ground truth” RBP binding probability was improved), along with the average improvement in binding probabilities. An example of a trajectory of optimized RNAs is shown in Fig. S6."
},
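The optimization loop just described, gradient-based activation maximization on the embedding classifier in latent space, can be sketched as follows. This is a minimal stand-in rather than the authors' pipeline: a fixed 2-D logistic classifier (hypothetical weights `W` and `b`) replaces the jointly trained MLP embedding classifier, and the decoding and oracle-evaluation steps with the external full classifier are omitted.

```python
# Sketch of activation maximization in a VAE latent space (illustrative only):
# gradient ascent on the predicted binding probability of a toy logistic
# classifier standing in for the jointly trained embedding classifier.
import numpy as np

W = np.array([1.0, -2.0])  # hypothetical classifier weights
b = 0.5                    # hypothetical bias

def predict(z):
    """Predicted binding probability for latent code z."""
    return 1.0 / (1.0 + np.exp(-(W @ z + b)))

def optimize_latent(z, steps=50, lr=0.1):
    """Gradient ascent on predict(z); returns the optimized latent code."""
    z = z.astype(float).copy()
    for _ in range(steps):
        p = predict(z)
        z += lr * p * (1.0 - p) * W  # d sigmoid(W.z + b) / dz
    return z

z0 = np.array([0.0, 0.0])
z_opt = optimize_latent(z0)
```

In the actual model, each intermediate latent code would additionally be decoded into an RNA structure and scored by the separately trained full classifier.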
{
"heading": "5 RELATED WORK",
"text": "Over the years, the field of computational drug discovery has witnessed the emergence of graphcentric approaches. One of the earliest method, proposed in GómezBombarelli et al. (2018), is defined on the linearized format of molecular structures and represents a family of methods that\nrely on sequential models to represent and generate SMILES strings of chemical compounds. Later methods have sought to construct more chemical priors into the model, via (1) leveraging graph based representation and generation techniques, (2) enforcing direct chemical constraints to the decoding process, (3) considering a multiscale view of the molecular structures, or (4) using reinforcement learning to integrate more training signal of the molecular structure and function. As a result, greater success has been achieved by models such as Kusner et al. (2017); Liu et al. (2018); Jin et al. (2018); You et al. (2018) at generating and searching valid and more useful chemical compounds.\nGraph representation learning is at the heart of these more recent approaches, to help understand the rules governing the formation of these molecular structures, as well as the correspondence between structures and functions. Duvenaud et al. (2015) were among the first to apply GNN to learn molecular fingerprints, and the general neural message passing framework for molecules is proposed in Gilmer et al. (2017), which demonstrate the power of MPNN across various molecular benchmarking tasks. 
These prior works on molecular MPNNs, together with GNN architectures developed in other areas, such as relational edges (Schlichtkrull et al., 2018) and attention (Velickovic et al., 2018), have laid the foundation for the success of these deep generative models.\nDespite the fact that RNA molecules can adopt complex structures, dedicated graph representation learning techniques for RNA have been scarce, with some recent works beginning to leverage graph-related learning techniques to predict RNA folding (Chen et al., 2020; Singh et al., 2019) and to represent RNA molecular structures (Yan et al., 2020; Oliver et al., 2020). Prior to our work, the design of RNA has mostly focused on the inverse design problem, which is to conditionally generate an RNA sequence whose MFE secondary structure corresponds to an input secondary structure. This line of prior work has therefore relied predominantly on sequential techniques, with some representative methods based on reinforcement learning (Runge et al., 2019), or, more classically, framed as a combinatorial optimization problem and solved with sampling-based techniques (Churkin et al., 2017). These prior works are mainly concerned with querying an energy model with fixed thermodynamic parameters and fixed dynamics of RNA folding, which is in itself limited compared to learning-based approaches (Chen et al., 2020; Singh et al., 2019), and they are unable to model a joint distribution over RNA sequences and possible folds."
},
{
"heading": "6 CONCLUSION AND FUTURE WORKS",
"text": "In this work we propose the first graphbased deep generative approach for jointly embedding and generating RNA sequence and structure, along with a series of benchmarking tasks. Our presented work has demonstrated impressive performance at generating diverse, valid and stable RNA secondary structures with useful properties.\nFor future works, there are several important directions to consider. First, it would be beneficial to obtain noncoding RNA families from the RFAM database (Kalvari et al., 2017) which would help our models learn more biologicallymeaningful representation indicative of RNA homology and functions, in addition to the evolutionarily conserved RNA structural motifs that would enable the generation of more stable RNA secondary structures. In that context, a detailed comparison to Infernal and other probabilistic contextfree grammar models would be meaningful.\nOn the methodological aspect, in light of the recent advances in protein sequences pretraining across a large evolutionaryscale (Rives et al., 2019; Elnaggar et al., 2020), our models for RNAs may similarly benefit by such a procedure with the data collected from RFAM. After the pretraining step, reinforcement learning can be used to finetune the generative component of our model with customizable rewards defined jointly on RNA structural validity, folding stability and functions such as binding to certain proteins.\nOn the evaluation side, it would be of great interest to analyze our models for any potential RNA tertiary structural motifs and to compare them with those deposited in the CaRNAval (Reinharz et al., 2018) or RNA 3D motifs database (Parlea et al., 2016). 
Our models would also need modifications to allow non-canonical interactions and pseudoknots, which are common in RNA tertiary structures.\nAll in all, the representation, generation and design of structured RNA molecules represent a rich, promising, and challenging area for future research in computational biology and drug discovery, and an opportunity to develop fundamentally new machine learning approaches."
},
{
"heading": "A ACKNOWLEDGEMENTS",
"text": "We would like to thank all members of the Hamilton lab, Blanchette lab, and the four anonymous reviewers for their insightful suggestions. This work was funded by a Genome Quebec/Canada grant to MB and by the Institut de Valorisation des Données (IAVDO) PhD excellence scholarship to ZY. WLH is supported by a Canada CIFAR AI Chair. We also thank Compute Canada for providing the computational resources."
},
{
"heading": "B BACKGROUND: RNA STRUCTURE AND KEY PROPERTIES",
"text": "Figure S1: A nested RNA secondary structure can be represented by: (A) dotbracket annotation, where basepairs corresponding to matching parentheses, or (B) a molecular planar graph with two types of edges, corresponding to consecutive nucleotides (backbone) and basepairing interactions, or (C) a junction tree where node are labeled as stems (S), hairpins (H), internal loops (I), or multiloops (M), and edges correspond to the connections between these elements. All three forms are equivalent.\nThe representation of an RNA molecule starts from its primary sequence structure—i.e., a single chain of nucleotides (adenine (A), cytosine (C), guanine (G) and uracil (U)). RNA sequences are flexible and can fold onto themselves, enabling the formation of bonds between complementary nucleotides (WatsonCrick basepairs [AU, GC], and Wobble basepairs [GU]), hence stabilizing the molecule 3. The set of pairs of interacting nucleotides in an RNA forms its socalled RNA secondary structure. In computational analyses of RNA, it is standard to assume that a secondary structure is nested: if [i, j] and [k, l] form base pairs with i < k, then either l < j (nesting) or k > j (nonoverlapping). This enables simple string or planar graph representations (Figure S1 a, b).\nThe nested structure assumption means that secondary structures can be modelled by a probabilistic context free grammar (Dowell & Eddy, 2004), or by the closely related junction tree structure (Figure S1 c) (SarrazinGendron et al., 2020), where each hypernode corresponds to a particular secondary substructure element: (1) stem: consecutive stacked basepairs locally forming a doublestranded structure; (2) hairpin loop : unpaired regions closed by a basepair; (3) internal loop: unpaired regions located between two stems; (4) multiloop: unpaired regions at the junction of at least three stems. Edges link elements that are adjacent in the structure.\nValidity and stability of RNA folding. 
The notion of free energy of RNA secondary structures can be used to characterize the stability of a particular conformation. Given an RNA sequence, there are combinatorially many valid RNA secondary structures, which all need to obey a set of constraints (summarized in section 3.3). However, some structures are more stable than others by having lower free energy. Therefore, these structures are more likely to exist (hence more useful) in reality, due to the stochastic nature of RNA folding. The free energy of an RNA secondary structure can be estimated by an energy-based model with thermodynamic parameters obtained from experiments (Mathews et al., 2004), wherein the minimum free energy (MFE) structure can be predicted, up to a reasonable approximation (Lorenz et al., 2011).4\n3There exist other non-canonical base pairs, which are excluded from our current work.\n4Throughout this work, we use RNAfold (Lorenz et al., 2011) to compute free energy as well as the MFE structure, due to its interpretability and acceptable accuracy for moderately sized RNAs."
},
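The validity conditions described in this appendix (balanced brackets, canonical Watson-Crick [AU, GC] and Wobble [GU] pairing) can be checked mechanically from the dot-bracket form. The sketch below is our own, not the paper's code; the function names are illustrative, and the `min_hairpin=3` default (at least three unpaired bases inside a hairpin loop) is our assumption about the constraints summarized in section 3.3.

```python
# Minimal dot-bracket validity checker (illustrative sketch, not the
# authors' code): parse matching parentheses into base pairs, then verify
# complementarity and a minimum hairpin-loop length.
ALLOWED_PAIRS = {("A", "U"), ("U", "A"), ("G", "C"),
                 ("C", "G"), ("G", "U"), ("U", "G")}

def parse_dot_bracket(structure):
    """Return the list of (i, j) base pairs; raise on unbalanced brackets."""
    stack, pairs = [], []
    for j, ch in enumerate(structure):
        if ch == "(":
            stack.append(j)
        elif ch == ")":
            if not stack:
                raise ValueError("unmatched ')' at position %d" % j)
            pairs.append((stack.pop(), j))
        elif ch != ".":
            raise ValueError("unexpected symbol %r" % ch)
    if stack:
        raise ValueError("unmatched '(' at position %d" % stack[-1])
    return pairs

def is_valid(sequence, structure, min_hairpin=3):
    """Check base-pair complementarity and the minimum loop length."""
    try:
        pairs = parse_dot_bracket(structure)
    except ValueError:
        return False
    return all((sequence[i], sequence[j]) in ALLOWED_PAIRS
               and j - i > min_hairpin
               for i, j in pairs)
```

For example, `is_valid("GGGAAACCC", "(((...)))")` accepts a simple stem-loop, while a non-complementary pair or a too-short hairpin is rejected.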
{
"heading": "C DATASET AND METRICS",
"text": "The unlabeled dataset is obtained from the complete human transcriptome which is downloaded from the Ensembl database (Aken et al. (2016); version GRCh38). We slice the transcripts into snippets with length randomly drawn between 32 and 512 nts, and use RNAfold to obtain the MFE structures. We randomly split the dataset into a training set that contains 1,149,859 RNAs, and 20,000 heldout RNAs for evaluating decoding from the posterior distribution. More information on the structural diversity and complexity of this dataset is shown in Figure S2, which should present significant challenges for our algorithms.\nThe labeled dataset is pulled from a previous study on sequence and structural binding preference of RNA binding proteins (RBP), using an in vitro selection protocol called RNAcompeteS (Cook et al., 2017) which generates synthesized RNA sequences bound or unbound to a given RBP. RNAs in this experiment are of uniform length, i.e. 40 nts, and offer a rich abundance of RNA secondary structures compared to its predecessor protocols such as RNAcompete (Ray et al., 2009; 2013). Since no benchmark has been ever established since its publication, we randomly sample 500,000 positive sequences bound to an RBP, and the same amount of negative sequences from the pool of unbound sequences, to curate a dataset for each of the seven RBPs investigated in the paper. Then, 80% of all RNAs are randomly selected to the train split, and the rest goes to the test split.\nOur evaluation scheme for the generated RNA secondary structures includes the following metrics:\n• validity: percentage of generated RNA secondary structures that conform to the structural constraints specified in section 3.3.\n• free energy deviation (FE DEV): difference of free energy between the generated RNA secondary structure and the MFE structure of the corresponding sequence, which quantifies the gap of both structures from an energy perspective. 
A lower FE DEV should indicate higher stability of the generated RNAs.\n• free energy deviation normalized by length (Normed FE DEV): FE DEV divided by the length of the generated RNA, which distributes the contribution of the total FE DEV to each base.\n• 5-mer sequence diversity: entropy of the normalized counts of 5-mer substrings, which directly measures the diversity of RNA sequences, and indirectly that of RNA secondary structures when combined with FE DEV, since monolithic structures over diverse sequences would lead to high FE DEV."
},
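The 5-mer sequence diversity metric above is the Shannon entropy of the normalized counts of 5-mer substrings pooled over a set of sequences. The sketch below uses our own function naming (not the paper's code) and reports entropy in nats.

```python
# Illustrative implementation of the 5-mer sequence-diversity metric:
# Shannon entropy (in nats) of normalized k-mer counts over a sequence pool.
import math
from collections import Counter

def kmer_diversity(sequences, k=5):
    """Entropy of normalized k-mer counts pooled over all sequences."""
    counts = Counter(seq[i:i + k]
                     for seq in sequences
                     for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total)
                for c in counts.values())
```

A pool containing a single repeated 5-mer scores 0; more varied pools score higher, up to the log of the number of distinct 5-mers.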
{
"heading": "D TREE ENCODING GRU",
"text": "Following Eq.3, TGRU computes a new message vtĜi,Ĝj from Ĝi and Ĝj , based on the features in Ĝi denoted by xĜi , as well as neural messages from neighboring subgraphs to Ĝi, i.e. {v t−1 Ĝk,Ĝi\n Ĝk ∈ N(Ĝi)}. The internal structure of TGRU is equivalent to the tree encoder employed in Jin et al. (2018), which is essentially a neural analogue of the belief propagation algorithm on junction trees. Nevertheless, we write down the message passing formulas of TGRU here:\nsĜi,Ĝj = ∑\nĜk∈N(Ĝi)\nvt−1 Ĝk,Ĝi\n(S1)\nzĜi,Ĝj = σ(W z[xĜi  sĜi,Ĝj ] + b z) (S2)\nrĜk,Ĝi = σ(W r[xĜi  v t−1 Ĝk,Ĝi ] + br) (S3) v̂Ĝi,Ĝj = Tanh(W [xĜi  ∑\nĜk∈N(Ĝi)\nrĜk,Ĝi · v t−1 Ĝk,Ĝi ]) (S4)\nvtĜi,Ĝj = (1− zĜi,Ĝj ) sĜi,Ĝj + zĜi,Ĝj v̂Ĝi,Ĝj (S5)"
},
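Eqs. (S1)-(S5) can be transcribed almost line for line. The NumPy sketch below is illustrative only: the weight shapes, naming, and random untrained parameters are our own choices, with $[\cdot \| \cdot]$ realized as vector concatenation.

```python
# NumPy transcription of the TGRU update, Eqs. (S1)-(S5); illustrative only.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tgru_update(x_i, incoming, Wz, bz, Wr, br, W):
    """One TGRU step producing the message from subgraph i to subgraph j.

    x_i      : (d,) feature vector of subgraph i
    incoming : non-empty list of (d,) messages v^{t-1} from neighbors of i
    """
    s = np.sum(incoming, axis=0)                              # Eq. (S1)
    z = sigmoid(Wz @ np.concatenate([x_i, s]) + bz)           # Eq. (S2)
    gated = sum(sigmoid(Wr @ np.concatenate([x_i, v]) + br) * v
                for v in incoming)                            # Eq. (S3)
    v_hat = np.tanh(W @ np.concatenate([x_i, gated]))         # Eq. (S4)
    return (1.0 - z) * s + z * v_hat                          # Eq. (S5)

# demo with random (untrained) parameters of dimensionality d = 4
rng = np.random.default_rng(0)
d = 4
Wz, Wr, W = (rng.normal(size=(d, 2 * d)) for _ in range(3))
bz = br = np.zeros(d)
x_i = rng.normal(size=d)
incoming = [rng.normal(size=d) for _ in range(3)]
v_new = tgru_update(x_i, incoming, Wz, bz, Wr, br, W)
```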
{
"heading": "E ALGORITHM FOR HIERARCHICALLY DECODING STRUCTURED RNA",
"text": "Algorithm 1: DFS decode RNA secondary structure 1 Given: zT , zG , M TI, M SI a 2 Initialize: stack ← [ ] 3 function decode(zT , zG) 4 root← sample(MLPnode(zT )) ; 5 root.add incoming message(zT ) ; 6 stack.push((root, 0)) ; 7 t← 0 ; 8 while t ≤ M TI and stack.size() ≥ 1 do 9 c node, last nuc← stack.get last item();\n10 all msg ← {msg  ∀msg ∈ c node.get incoming message()} ; 11 local field← [c node.label()  c node.get segment features()] ; 12 new msg ← TGRU(local field, all msg) ;\n// topological prediction 13 is backtrack ← sample(MLPtopo(zT ))∗ ; // nucleotide segment prediction 14 new msg, last nuc, decoded segment, segment features← decode segment(new msg, last nuc, zT , zG ,M SI)∗ ; 15 c node.add decoded segment(decoded segment) ; 16 c node.add segment features(segment features) ; 17 if is backtrack = True then\n// backtrack to the parent node 18 c node.add incoming message(new msg) ; 19 p node, ← stack.get penultimate item(); 20 p node.add neighbor(c node) ; 21 stack.update penultimate item((p node, last nuc)); 22 stack.pop() ; 23 else\n// predict and expand to new tree node 24 new node← sample(MLPnode(new msg))∗ ; 25 new node.add incoming message(new msg) ; 26 new node.add neighbor((c node, last nuc)) ; 27 stack.push(new node) ; 28 end 29 t← t+ 1 ; 30 end 31 return root ;\naM TI refers to the threshold which set the maximum allowed number of topological prediction steps; M SI is another threshold to limit the length of each decoded nucleotide segment."
},
{
"heading": "F DETAILS FOR APPLYING RNA STRUCTURAL CONSTRAINTS TO LINEARIZED DECODING PROCEDURES",
"text": "When decoding from the joint vocabulary of sequence and dotbracket structure ({A,C,G,U} × {., (, )}), whenever a nucleotide nuci with a left bracket is sampled at step i, we append them to a stack, i.e. {(nuci0 , i0) . . . (nuci, i)}. Then, at decode step j,\n• if i − j ≤ 3, a proper mask will be added to the categorical logits of the vocabulary, to avoid sampling any nucleotides with right brackets, which means only an unpaired nucleotide or one that comes with a left bracket can be sampled;\n• if i− j > 3, a mask will be applied to make sure that only a nucleotide complementary to nuci can be sampled with the right bracket. Sampling nucleotides with other forms of structures are allowed.\nAs soon as a nucleotide with a closing right bracket is sampled, we pop out (nuci, i) from the stack. The special symbol for stop decoding can only be sampled when the stack has become empty."
},
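The two masking rules above can be implemented as a boolean mask over the 12-token joint vocabulary. The sketch below is our own illustration, not the paper's code; the names and the `COMPLEMENT` table (Watson-Crick pairs plus the Wobble pair) are illustrative choices.

```python
# Boolean mask over the joint vocabulary {A,C,G,U} x {., (, )} implementing
# the two masking rules for constrained linearized decoding (illustrative).
COMPLEMENT = {"A": {"U"}, "U": {"A", "G"},   # Watson-Crick pairs AU, GC
              "G": {"C", "U"}, "C": {"G"}}   # plus the Wobble pair GU
VOCAB = [(nuc, sym) for nuc in "ACGU" for sym in ".()"]

def token_mask(stack, j, min_loop=3):
    """True = token may be sampled at decoding step j.

    stack holds (nucleotide, position) pairs of still-open left brackets.
    """
    mask = []
    for nuc, sym in VOCAB:
        if sym != ")":
            mask.append(True)          # unpaired / opening tokens: always OK
        elif not stack:
            mask.append(False)         # no open bracket to close
        else:
            top_nuc, i = stack[-1]
            if j - i <= min_loop:
                mask.append(False)     # hairpin loop would be too short
            else:
                mask.append(nuc in COMPLEMENT[top_nuc])
    return mask
```

A decoder would then add a large negative constant to the logits of masked tokens before sampling from the categorical distribution.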
{
"heading": "G DETAILS FOR APPLYING RNA STRUCTURAL CONSTRAINTS TO HIERARCHICAL DECODING PROCEDURES",
"text": "Additional constraints to be enforced during the hierarchical decoding process to ensure the validity of the decoded RNA secondary structure. Recall in section 3.2 that three types of predictions are involved with the hierarchical decoding, therefore, each type is associated with its own set of rules. All set of rules can be observed by adding proper masks to the categorical logits before sampling, which are detailed below.\nConstraints for making topological prediction, when the current node is\n• stem node, then the algorithm always expands to a new node upon its first visit, or backtracks to its parent node upon revisit;\n• hairpin node, then the algorithm always backtracks; • internal loop, then the algorithm acts similarly as for stem node; • multiloop, then the algorithm always expands upon first visit and the next revisit. Further revisits\nto the same multiloop node are not regulated.\nConstraints for predicting new tree node, when the current node is\n• stem node, then its child node when exists can be either a hairpin loop, an internal loop, or a multiloop;\n• hairpin node, internal loop or multiloop, then its child node must be a stem node.\nConstraints for decoding nucleotide segment. Due to the property of nonempty intersection between adjacent subgraphs, the start token for decoding a segment at the current node, is always the last nucleotide decoded at the last node. Therefore, without explicitly mentioning, the algorithm needs to decode at least one new nucleotide at each segment. When the current node is\n• stem node, and if it is upon its first visit (i.e. decoding the first segment of a stem), then there is no for constraints. 
Otherwise, upon its revisit, the algorithm needs to decode exactly the complementary bases and in the reverse order, according to the first decoded segment;\n• hairpin node, then the decoder needs to decode at least four nucleotides before seeing the stop symbol, unless the hairpin is also the root node.\n• internal loop node, and if it is upon its first, then constraint is not necessary. Otherwise, upon its revisit, the algorithm needs to decode at least one unpaired nucleotide on condition that the first decoded internal loop segment does not contain any unpaired nucleotides;\n• multiloop node, then there is no need for constraints."
},
{
"heading": "H DETAILS FOR PARAMETERIZING PRIOR DISTRIBUTION USING NORMALIZING FLOW",
"text": "A normalizing flow involves a series of bijective transformation with tractable Jacobian logdeterminant, to map an observed datapoint x ∼ pθ(x) from a complex distribution to a simpler one, such as the standard normal distribution.\nConsidering the simplified case where we have a single bijective function fθ : Z → X to map some simple latent variables z to observed datapoint x, then, using the change of variable theorem, the likelihood of the observed datapoint can be evaluated as:\npθ(x) = pz(f −1 θ (x))det\n∂f−1θ (x)\n∂x  (S6)\nwhere pz(.) denotes some simple base distribution, e.g. N (0; I). Then, it becomes clear the efficiency of this scheme heavily relies on the efficiency of inverting the forward mapping fθ as well as computing its Jacobian logdeterminant.\nIn this project, we use a type of continuous normalizing flow (CNF) which simplifies the above mentioned computation (Chen et al., 2018). Consider a time continuous dynamics fψ(z(t), t) of some intermediate data representation z(t), and again z(t0) ∼ pz(.), the transformation of variable, along with its inverse mapping, can be expressed as:\nz , z(t1) = z(t0) + ∫ t1 t0 fψ(z(t), t)dt (S7)\nz(t0) = z(t1) + ∫ t0 t1 fψ(z(t), t)dt (S8)\nand the change of probability density can be expressed as: log pψ(z) = log pz(z(t0))− ∫ t1 t0 tr( ∂fθ ∂z(t) )dt (S9)\nNote that the invertibility issue is no longer a concern under some mild constraints (Chen et al., 2018). Also, Eq. S9 only involves a more lightweight trace operation on the Jacobian rather than evaluating its logdeterminant.\nTherefore, we learn a parameterized prior using a CNF, and observe the decomposition of the KL term in the VAE objective:\nKL(qφ(zx)pψ(z)) = −Ez∼qφ(zx)[pψ(z)]−H[qφ(zx)] (S10)\nTherefore, during training our CNF parameterized with ψ works on the transformation of complex latent encodings z ∼ qφ(zx) to some simple z(t0) ∼ N (0; I), with an exact likelihood described by Eq. S9 and integrated into Eq. 
S10 for the complete training objective. During inference, we simply sample zt0 ∼ N (0; I), and use our CNF to reversely transform it to z ∼ pψ(.) which should be closer to the approximate posterior.\nOur specific parameterization of the CNF follows from Yang et al. (2019) and Grathwohl et al. (2019), interleaving two hidden concatsquash layers of dimensionality 256 with Tanh nonlinearity.\nI INFORMATION OF THE UNLABELED RNA DATASET\nFigure S2: This figure contains information of the unlabeled RNA dataset. (A) The number of hypernodes appears to grow linearly with the length of RNA, and (B) the junction tree height also grows as the length increases but on a more moderate scale. (C) and (D) have shown barplots of the number of hypernodes and tree height, indicating that the junction tree of RNA can take on significant depth hence contributing to the diversity and complexity of RNA secondary structures represented in this dataset."
},
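As a sanity check on the change-of-variables formula in Eq. (S6), one can verify it numerically for a toy 1-D affine bijection in place of a full CNF: the pushforward of $\mathcal{N}(0, 1)$ under $f(z) = az + b$ is $\mathcal{N}(b, a^2)$, and Eq. (S6) must reproduce that density. All names below are illustrative, not from the paper's code.

```python
# Toy 1-D check of Eq. (S6): for x = f(z) = a*z + b with z ~ N(0, 1),
# p(x) = p_z(f^{-1}(x)) * |d f^{-1} / dx| must equal the N(b, a^2) density.
import math

def normal_pdf(z, mean=0.0, std=1.0):
    """Density of N(mean, std^2) at z."""
    u = (z - mean) / std
    return math.exp(-0.5 * u * u) / (std * math.sqrt(2.0 * math.pi))

def pushforward_pdf(x, a=2.0, b=1.0):
    """Density of x = a*z + b via the change-of-variables formula."""
    z = (x - b) / a                      # f^{-1}(x)
    return normal_pdf(z) * abs(1.0 / a)  # |det d f^{-1} / dx| = 1 / |a|
```

The CNF replaces the explicit inverse and Jacobian determinant with the reverse-time integral of Eq. (S8) and the trace integral of Eq. (S9), but the identity being computed is the same.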
{
"heading": "J HYPERPARAMETERS",
"text": "Table S1: Hyperparameters for training VAE and full classifier models. Note that hidden units refer to the dimensionality of encoders and decoders from LSTMVAE, GraphVAE as well as HierVAE models. Dropout is applied to the embedding MLP classifier in case of training semisupervised VAEs, which contains one hidden layer.\nfor VAE models\nlatent dimensionality 128 hidden units 512 GMPNN iterations 5 TGRU iterations 10 learning rate 1e3 batch size 32 optimizer AMSGrad (Reddi et al., 2018) dropout ratio 0.2 M TI 300 S TI (hierarchical decoder) 100 S TI (linearized decoder) 1000\nfor full classifier models (overriding some above hyperparameters)\nlearning rate 2e4 epochs 200 early stopping epochs 5"
},
{
"heading": "K RNACOMPETES CLASSIFIERS ON PRETRAINED AND FIXED VAE EMBEDDINGS",
"text": "Table S2: Performance of simple MLP classifiers on top of fixed latent embeddings from VAE models, which have been pretrained on the unlabeled RNA dataset as originally shown in Table 1.\nRBP LSTMVAE GraphVAE HierVAE\nHuR 0.867 0.858 0.860 PTB 0.886 0.878 0.883 QKI 0.748 0.756 0.746 Vts1 0.775 0.758 0.774 RBMY 0.734 0.725 0.731 SF2 0.867 0.862 0.866 SLBP 0.749 0.737 0.747"
},
{
"heading": "L ENDTOEND RNACOMPETES CLASSIFIERS",
"text": "Table S3: We use the same encoding architectures as in the generative models, and report their AUROC averaged across 6 runs, for each RNAcompeteS RBP dataset.\nRBP LSTMSeqOnly LSTM Graph Hierarchical\nHuR 0.880 ± 0.000 0.880 ± 0.000 0.880 ± 0.000 0.888 ± 0.002 PTB 0.900 ± 0.000 0.910 ± 0.000 0.910 ± 0.000 0.910 ± 0.000 QKI 0.820 ± 0.000 0.830 ± 0.000 0.825 ± 0.002 0.830 ± 0.000 Vts1 0.900 ± 0.000 0.908 ± 0.002 0.637 ± 0.079 0.910 ± 0.000 RBMY 0.905 ± 0.002 0.880 ± 0.003 0.802 ± 0.055 0.870 ± 0.002 SF2 0.890 ± 0.000 0.900 ± 0.000 0.900 ± 0.000 0.900 ± 0.000 SLBP 0.777 ± 0.002 0.790 ± 0.000 0.797 ± 0.002 0.797 ± 0.002"
},
{
"heading": "M ALTERNATIVE HIERVAE TRAINING ON RNACOMPETES",
"text": "Table S4: Training HierVAE on supervised RNAcompeteS dataset. All models are trained with 20 epochs, including 5 epochs for warmup, 6 epochs to linearly raise beta from 0 to 3e3, and 9 remaining epochs with beta fixed at 3e3. The test set measures AUROC and posterior decoding on the final model.\nTest Post R&S Post NR&D\nDataset AUROC Valid FE DEV RECON ACC Valid FE DEV RECON ACC\nHuR 0.871 100% 0.951 18.97% 99.34% 0.702 31.52% PTB 0.899 100% 0.826 21.17% 98.64% 0.674 31.28% QKI 0.822 100% 0.867 17.82% 99.40% 0.627 30.62% Vts1 0.874 100% 1.056 13.71% 99.39% 0.770 24.97% RBMY 0.872 100% 0.963 11.86% 98.68% 0.690 22.91% SF2 0.874 100% 0.921 14.44% 99.32% 0.668 25.99% SLBP 0.764 100% 1.033 14.84% 99.44% 0.743 26.93%"
},
{
"heading": "N COMPARISON OF GENERATED RNA SECONDARY STRUCTURES TO MFE STRUCTURES",
"text": "Figure S3: A comparison of generated RNAs (left) to their corresponding MFE structures (right). RNAs are generated with structural constraints from HierVAE on three random axes. The ground truth MFE structures are predicted by RNAfold, and the generated RNAs are shown to evolve smoothly in the latent space along with their corresponding MFE structures which have also shown relatively smooth transitions."
},
{
"heading": "O NEIGHBORHOOD VISUALIZATION OF A CYSTEINECARRYING TRANSFERRNA",
"text": "Figure S4: Neighborhood visualization of tRNACys6which is marked by the red bounding box in the center and the walk in the latent space takes place on two random orthogonal axes. Note that actual secondary structure of tRNACys plotted in the figure is different compared to the one deposited online due to the prediction of RNAfold.\n6https://rnacentral.org/rna/URS00001F47B5/9606"
},
{
"heading": "P NEIGHBORHOOD VISUALIZATION OF A 5S RIBOSOMAL RNA",
"text": "Figure S5: Neighborhood visualization of a 5S ribosomal RNA8which is marked by the red bounding box in the center and the walk in the latent space takes place on two random orthogonal axes. Note that actual secondary structure of this 5S ribosomal RNA plotted in the figure is different compared to the one deposited online due to the prediction of RNAfold.\n8https://rnacentral.org/rna/URS000075B93F/9606"
},
{
"heading": "Q TARGETED RNA GENERATION — AN EXAMPLE",
"text": "Figure S6: An example of searching novel structured RNAs with higher chance of binding to HuR. The optimization takes place in the latent space of HierVAE, starting from the initial encoding of a random RNA molecule in the test set, and at each step altering the latent encoding by using activation maximization on the embedding classifier. The trajectory of generated RNAs is shown in the order of left to right and top to bottom, and the field PRED indicates that the probability of binding, as predicted by another external full classifier on the decoded molecular structure, is overall increasing as the decoded RNA structures smoothly evolving."
}
]
 2021
 RNA SECONDARY STRUCTURES

SP:0bd749fe44c37b521bd40f701e1428890aaa9c95
 [
"This paper presents a benchmark for discourse phenomena in machine translation. Its main novelty lies in the relatively large scale, spanning three translation directions, four discourse phenomena, and 1505000 data points per language and phenomenon. A relatively large number of systems from previous work is benchmarked on each test set, and agreement with human judgments is measured."
]
 Despite increasing instances of machine translation (MT) systems including extra-sentential context information, the evidence for translation quality improvement is sparse, especially for discourse phenomena. Popular metrics like BLEU are not expressive or sensitive enough to capture quality improvements or drops that are minor in size but significant in perception. We introduce first-of-their-kind MT benchmark test sets that aim to track and hail improvements across four main discourse phenomena: anaphora, lexical consistency, coherence and readability, and discourse connective translation. We also introduce evaluation methods for these tasks, and evaluate several competitive baseline MT systems on the curated datasets. Surprisingly, we find that the complex context-aware models that we test do not improve discourse-related translations consistently across languages and phenomena. Our evaluation benchmark is available as a leaderboard at <dipbenchmark1.github.io>.
 []
 [
{
"authors": [
"Rachel Bawden",
"Rico Sennrich",
"Alexandra Birch",
"Barry Haddow"
],
"title": "Evaluating discourse phenomena in neural machine translation",
"venue": null,
"year": 2018
},
{
"authors": [
"Peter Bourgonje",
"Manfred Stede"
],
"title": "The potsdam commentary corpus 2.2: Extending annotations for shallow discourse parsing",
"venue": "In LREC,",
"year": 2020
},
{
"authors": [
"Marine Carpuat"
],
"title": "One translation per discourse",
"venue": "SEW@NAACLHLT,",
"year": 2012
},
{
"authors": [
"Mauro Cettolo",
"Niehues Jan",
"Stüker Sebastian",
"Luisa Bentivogli",
"R. Cattoni",
"Marcello Federico"
],
"title": "The iwslt 2016 evaluation campaign",
"venue": null,
"year": 2016
},
{
"authors": [
"Mauro Cettolo",
"Marcello Federico",
"Luisa Bentivogli",
"Niehues Jan",
"Stüker Sebastian",
"Sudoh Katsuitho",
"Yoshino Koichiro",
"Federmann Christian"
],
"title": "Overview of the iwslt 2017 evaluation campaign",
"venue": null,
"year": 2017
},
{
"authors": [
"Jacob Devlin",
"MingWei Chang",
"Kenton Lee",
"Kristina Toutanova. Bert"
],
"title": "Pretraining of deep bidirectional transformers for language understanding",
"venue": "arXiv preprint arXiv:1810.04805,",
"year": 2018
},
{
"authors": [
"Liane Guillou"
],
"title": "Improving pronoun translation for statistical machine translation",
"venue": "In EACL,",
"year": 2012
},
{
"authors": [
"Liane Guillou"
],
"title": "Analysing lexical consistency in translation",
"venue": "In Proceedings of the Workshop on Discourse in Machine Translation,",
"year": 2013
},
{
"authors": [
"Liane Guillou",
"Christian Hardmeier"
],
"title": "PROTEST: A test suite for evaluating pronouns in machine translation",
"venue": "In Proceedings of the Tenth International Conference on Language Resources and Evaluation",
"year": 2016
},
{
"authors": [
"Liane Guillou",
"Christian Hardmeier"
],
"title": "Automatic referencebased evaluation of pronoun translation misses the point",
"venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,",
"year": 2018
},
{
"authors": [
"Najeh Hajlaoui",
"Andrei PopescuBelis"
],
"title": "Assessing the accuracy of discourse connective translations: Validation of an automatic metric",
"venue": "In CICLing,",
"year": 2013
},
{
"authors": [
"Christian Hardmeier",
"Marcello Federico"
],
"title": "Modelling pronominal anaphora in statistical machine translation",
"venue": "In Proceedings of the 2010 International Workshop on Spoken Language Translation, IWSLT",
"year": 2010
},
{
"authors": [
"Hany Hassan",
"Anthony Aue",
"Chang Chen",
"Vishal Chowdhary",
"Jonathan R. Clark",
"Christian Federmann",
"Xuedong Huang",
"Marcin JunczysDowmunt",
"William Lewis",
"Mu Li",
"Shujie Liu",
"T.M. Liu",
"Renqian Luo",
"Arul Menezes",
"Tao Qin",
"Frank Seide",
"Xu Tan",
"Fei Tian",
"Lijun Wu",
"Shuangzhi Wu",
"Yingce Xia",
"Dongdong Zhang",
"Zhirui Zhang",
"Ming Zhou"
],
"title": "Achieving human parity on automatic chinese to english news",
"venue": "translation. ArXiv,",
"year": 2018
},
{
"authors": [
"Sepp Hochreiter",
"Jürgen Schmidhuber"
],
"title": "Long shortterm memory",
"venue": "Neural Computation,",
"year": 1997
},
{
"authors": [
"Kyle P. Johnson",
"Patrick Burns",
"John Stewart",
"Todd Cook"
],
"title": "Cltk: The classical language toolkit, 2014–2020",
"venue": "URL https://github.com/cltk/cltk",
"year": 2020
},
{
"authors": [
"Prathyusha Jwalapuram",
"Shafiq Joty",
"Irina Temnikova",
"Preslav Nakov"
],
"title": "Evaluating pronominal anaphora in machine translation: An evaluation measure and a test suite",
"venue": "EMNLPIJCNLP,",
"year": 2019
},
{
"authors": [
"Yunsu Kim",
"Thanh Tran",
"Hermann Ney"
],
"title": "When and why is documentlevel context useful in neural machine translation? ArXiv",
"venue": null,
"year": 1910
},
{
"authors": [
"Ekaterina LapshinovaKoltunski",
"Christian Hardmeier",
"Pauline Krielke"
],
"title": "ParCorFull: a parallel corpus annotated with full coreference",
"venue": "In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018),",
"year": 2018
},
{
"authors": [
"Samuel Läubli",
"Rico Sennrich",
"Martin Volk"
],
"title": "Has machine translation achieved human parity? a case for documentlevel evaluation",
"venue": "In EMNLP,",
"year": 2018
},
{
"authors": [
"Kazem LotfipourSaedi"
],
"title": "Lexical cohesion and translation equivalence",
"venue": null,
"year": 1997
},
{
"authors": [
"Sameen Maruf",
"Gholamreza Haffari"
],
"title": "Document context neural machine translation with memory networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
"venue": null,
"year": 2018
},
{
"authors": [
"Thomas Meyer",
"Andrei PopescuBelis",
"N. Hajlaoui",
"Andrea Gesmundo"
],
"title": "Machine translation of labeled discourse connectives",
"venue": "AMTA",
"year": 2012
},
{
"authors": [
"Lesly Miculicich",
"Dhananjay Ram",
"Nikolaos Pappas",
"James Henderson"
],
"title": "Documentlevel neural machine translation with hierarchical attention networks",
"venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,",
"year": 2018
},
{
"authors": [
"Lesly Miculicich Werlen",
"Andrei PopescuBelis"
],
"title": "Validation of an automatic metric for the accuracy of pronoun translation (APT)",
"venue": "In Proceedings of the Third Workshop on Discourse in Machine Translation,",
"year": 2017
},
{
"authors": [
"Han Cheol Moon",
"Tasnim Mohiuddin",
"Shafiq R. Joty",
"Xiaofei Chi"
],
"title": "A unified neural coherence model",
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing,",
"year": 2019
},
{
"authors": [
"Jane Morris",
"Graeme Hirst"
],
"title": "Lexical cohesion computed by thesaural relations as an indicator of the structure of text",
"venue": "Computational Linguistics,",
"year": 1991
},
{
"authors": [
"Maria Nadejde",
"Alexandra Birch",
"Philipp Koehn"
],
"title": "Proceedings of the First Conference on Machine Translation, Volume 1: Research Papers",
"venue": "The Association for Computational Linguistics,",
"year": 2016
},
{
"authors": [
"Myle Ott",
"Sergey Edunov",
"Alexei Baevski",
"Angela Fan",
"Sam Gross",
"Nathan Ng",
"David Grangier",
"Michael Auli"
],
"title": "fairseq: A fast, extensible toolkit for sequence modeling",
"venue": "In Proceedings of NAACL-HLT 2019: Demonstrations,",
"year": 2019
},
{
"authors": [
"Kishore Papineni",
"Salim Roukos",
"Todd Ward",
"Wei-Jing Zhu"
],
"title": "Bleu: a method for automatic evaluation of machine translation",
"venue": "In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics,",
"year": 2002
},
{
"authors": [
"Matthew Peters",
"Mark Neumann",
"Mohit Iyyer",
"Matt Gardner",
"Christopher Clark",
"Kenton Lee",
"Luke Zettlemoyer"
],
"title": "Deep contextualized word representations",
"venue": "In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,",
"year": 2018
},
{
"authors": [
"Emily Pitler",
"Ani Nenkova"
],
"title": "Revisiting readability: A unified framework for predicting text quality",
"venue": "In EMNLP,",
"year": 2008
},
{
"authors": [
"M. Popel",
"M. Tomková",
"J. Tomek",
"Łukasz Kaiser",
"Jakob Uszkoreit",
"Ondrej Bojar",
"Z. Žabokrtský"
],
"title": "Transforming machine translation: a deep learning system reaches news translation quality comparable to human professionals",
"venue": "Nature Communications,",
"year": 2020
},
{
"authors": [
"Rashmi Prasad",
"Bonnie L. Webber",
"Aravind K. Joshi"
],
"title": "Reflections on the penn discourse treebank, comparable corpora, and complementary annotation",
"venue": "Computational Linguistics,",
"year": 2014
},
{
"authors": [
"Rashmi Prasad",
"Bonnie L. Webber",
"Alan Lee"
],
"title": "Annotation in the PDTB: the next generation",
"venue": null,
"year": 2018
},
{
"authors": [
"Rico Sennrich"
],
"title": "Why the time is ripe for discourse in machine translation",
"venue": "http://homepages.inf.ed.ac.uk/rsennric/wnmt2018.pdf,",
"year": 2018
},
{
"authors": [
"Rico Sennrich",
"Barry Haddow",
"Alexandra Birch"
],
"title": "Neural machine translation of rare words with subword units",
"venue": "In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016),",
"year": 2016
},
{
"authors": [
"Karin Sim Smith",
"Wilker Aziz",
"Lucia Specia"
],
"title": "A proposal for a coherence corpus in machine translation",
"venue": "In DiscoMT@EMNLP,",
"year": 2015
},
{
"authors": [
"Karin Sim Smith",
"Wilker Aziz",
"Lucia Specia"
],
"title": "The trouble with machine translation coherence",
"venue": "In Proceedings of the 19th Annual Conference of the European Association for Machine Translation,",
"year": 2016
},
{
"authors": [
"Swapna Somasundaran",
"Jill Burstein",
"Martin Chodorow"
],
"title": "Lexical chaining for measuring discourse coherence quality in test-taker essays",
"venue": "In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers,",
"year": 2014
},
{
"authors": [
"Jörg Tiedemann",
"Yves Scherrer"
],
"title": "Neural machine translation with extended context",
"venue": "In Proceedings of the Third Workshop on Discourse in Machine Translation,",
"year": 2017
},
{
"authors": [
"Jörg Tiedemann"
],
"title": "Parallel data, tools and interfaces in opus",
"venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC’12),",
"year": 2012
},
{
"authors": [
"Ashish Vaswani",
"Noam Shazeer",
"Niki Parmar",
"Jakob Uszkoreit",
"Llion Jones",
"Aidan N Gomez",
"Łukasz Kaiser",
"Illia Polosukhin"
],
"title": "Attention is all you need",
"venue": "Advances in Neural Information Processing Systems",
"year": 2017
},
{
"authors": [
"Elena Voita",
"Pavel Serdyukov",
"Rico Sennrich",
"Ivan Titov"
],
"title": "Context-aware neural machine translation learns anaphora resolution",
"venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics,",
"year": 2018
},
{
"authors": [
"Elena Voita",
"Pavel Serdyukov",
"Rico Sennrich",
"Ivan Titov"
],
"title": "Context-aware neural machine translation learns anaphora resolution",
"venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),",
"year": 2018
},
{
"authors": [
"Elena Voita",
"Rico Sennrich",
"Ivan Titov"
],
"title": "When a Good Translation is Wrong in Context: Context-Aware Machine Translation Improves on Deixis, Ellipsis, and Lexical Cohesion",
"venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence,",
"year": 2019
},
{
"authors": [
"W. Wagner"
],
"title": "Steven Bird, Ewan Klein and Edward Loper: Natural Language Processing with Python, Analyzing Text with the Natural Language Toolkit",
"venue": "Language Resources and Evaluation,",
"year": 2010
},
{
"authors": [
"KayYen Wong",
"Sameen Maruf",
"Gholamreza Haffari"
],
"title": "Contextual neural machine translation improves translation of cataphoric pronouns",
"venue": "In ACL,",
"year": 2020
},
{
"authors": [
"H. Xiong",
"Zhongjun He",
"Hua Wu",
"H. Wang"
],
"title": "Modeling coherence for discourse neural machine translation",
"venue": "In AAAI,",
"year": 2019
},
{
"authors": [
"Jiacheng Zhang",
"Huanbo Luan",
"Maosong Sun",
"Feifei Zhai",
"Jingfang Xu",
"Min Zhang",
"Yang Liu"
],
"title": "Improving the transformer translation model with document-level context",
"venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,",
"year": 2018
},
{
"authors": [
"Y. Zhou",
"N. Xue"
],
"title": "The chinese discourse treebank: a chinese corpus annotated with discourse relations",
"venue": "Language Resources and Evaluation,",
"year": 2015
},
{
"authors": [
"Michał Ziemski",
"Marcin Junczys-Dowmunt",
"Bruno Pouliquen"
],
"title": "The united nations parallel corpus v1.0",
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016),",
"year": 2016
}
]
 [
{
"heading": "1 INTRODUCTION AND RELATED WORK",
"text": "The advances in neural machine translation (NMT) systems have led to great achievements in terms of state-of-the-art performance in automatic translation tasks. There have even been claims that their translations are no worse than what an average bilingual human may produce (Wu et al., 2016) or that the translations are on par with professional translators (Hassan et al., 2018). However, extensive studies conducting evaluations with professional translators (Läubli et al., 2018; Popel et al., 2020) have shown that there is a statistically strong preference for human translations in terms of fluency and overall quality when evaluations are conducted monolingually or at the document level.\nDocument (or discourse) level phenomena (e.g., coreference, coherence) may not seem lexically significant, but contribute substantially to the readability and understandability of translated texts (Guillou, 2012). Targeted datasets for evaluating phenomena like coreference (Guillou et al., 2014; Guillou & Hardmeier, 2016; Lapshinova-Koltunski et al., 2018; Bawden et al., 2018; Voita et al., 2018b), or ellipsis and lexical cohesion (Voita et al., 2019), have been proposed.\nNMT frameworks such as the Transformer (Vaswani et al., 2017) provide more flexibility to incorporate larger context. This has spurred a great deal of interest in developing context-aware NMT systems that take advantage of source or target contexts, e.g., Miculicich et al. (2018), Maruf & Haffari (2018), Voita et al. (2018b; 2019), Xiong et al. (2019), Wong et al. (2020), to name a few.\nMost studies only report performance on specific testsets, often limited to improvements in BLEU (Papineni et al., 2002). Despite being the standard MT evaluation metric, BLEU has been criticised for its inadequacy; the scores are not interpretable, and are not sensitive to small lexical improvements that may lead to big improvements in fluency or readability (Reiter, 2018). 
There is no framework for a principled comparison of MT quality beyond the mere lexical matching done in BLEU: there are no standard corpora and no agreed-upon evaluation measures.\nTo address these shortcomings, we propose the DiP benchmark tests (for Discourse Phenomena), which enable the comparison of machine translation models across discourse task strengths and source languages. We create diagnostic testsets for four diverse discourse phenomena, and also propose automatic evaluation methods for these tasks. However, discourse phenomena in translations can be tricky to identify, let alone evaluate. A fair number of datasets proposed thus far have been manually curated, and automatic evaluation methods have often failed to agree with human judgments (Guillou & Hardmeier, 2018). To mitigate these issues, we use trained neural models for identifying and evaluating complex discourse phenomena and conduct extensive user studies to ensure agreement with human judgments. Our methods for automatically extracting testsets can be applied to multiple languages, and find cases that are difficult to translate without having to resort to synthetic data. Moreover, our testsets are extracted in a way that makes them representative of current challenges. They can be easily updated to reflect future challenges, preventing the pitfall of becoming outdated, which is a common failing of many benchmarking testsets.\nWe also benchmark established MT models on these testsets to convey the extent of the challenges they pose. Although discourse phenomena can and do occur at the sentence level (e.g., between clauses), we would expect MT systems that model extra-sentential context (Voita et al., 2018b; Zhang et al., 2018; Miculicich et al., 2018) to be more successful on these tasks. However, we observe significant differences in system behavior and quality across languages and phenomena, emphasizing the need for more extensive evaluation as a standard procedure. 
We propose to maintain a leaderboard that tracks and highlights advances in MT quality that go beyond BLEU improvement.\nOur main contributions in this paper are as follows:\n• Benchmark testsets for four discourse phenomena: anaphora, coherence & readability, lexical consistency, and discourse connectives.\n• Automatic evaluation methods and agreements with human judgments.\n• Benchmark evaluation and analysis of four context-aware systems contrasted with baselines, for German/Russian/Chinese-English language pairs."
},
{
"heading": "2 MACHINE TRANSLATION MODELS",
"text": "Model Architectures. We first introduce the MT systems that we will be benchmarking on our testsets. We evaluate a selection of established models of various complexities (simple sentence-level to complex context-aware models), taking care to include both source- and target-side context-aware models. We briefly describe the model architectures here:\n• S2S: A standard 6-layer base Transformer model (Vaswani et al., 2017) which translates sentences independently.\n• CONCAT: A 6-layer base Transformer whose input is two sentences (previous and current sentence) merged, with a special character as a separator (Tiedemann & Scherrer, 2017).\n• ANAPH: Voita et al. (2018b) incorporate source context by encoding it with a separate encoder, then fusing it in the last layer of a standard Transformer encoder using a gate. They claim that their model explicitly captures anaphora resolution.\n• TGTCON: To model target context, we implement a version of ANAPH with an extra operation of multi-head attention in the decoder, computed between representations of the target sentence and target context. The architecture is described in detail in the Appendix (A.5).\n• SAN: Zhang et al. (2018) use a source attention network: a separate Transformer encoder to encode source context, which is incorporated into the source encoder and target decoder using gates.\n• HAN: Miculicich et al. (2018) introduce a hierarchical attention network (HAN) into the Transformer framework to dynamically attend to the context at two levels: word and sentence. They achieve the highest BLEU when hierarchical attention is applied separately to both the encoder and decoder.\nDatasets and Training. The statistics for the datasets used to train the models are shown in Table 1. We tokenize the data using Jieba1 for Zh and Moses scripts2 for the other languages, lowercase the text, and apply BPE encodings3 from Sennrich et al. (2016). We learn the BPE encodings with the command learn_joint_bpe_and_vocab -s 40000. 
The scores reported are BLEU-4, computed either through fairseq or NLTK (Wagner, 2010). Further details about dataset composition, training settings and hyperparameters can be found in the Appendix (A.7).\n1https://github.com/fxsjy/jieba 2https://www.statmt.org/moses/ 3https://github.com/rsennrich/subword-nmt/\nBLEU scores. The BLEU scores on the WMT14 (De-En, Ru-En) and on the WMT17 (Zh-En) testsets for each of the six trained models are shown in Table 2. We were unable to train HAN for Zh-En as the model was not optimized for training with large datasets. In contrast to increases in BLEU for selected language pairs and datasets reported in published work, incorporating context within elaborate context-dependent models decreases BLEU scores for the Zh-En and De-En tasks. However, the simple concatenation-based model CONCAT performs better than S2S for De-En and Ru-En; this shows that context knowledge is indeed helpful for improving BLEU."
},
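The BLEU scores above rest on clipped n-gram precision, which is exactly why the metric is insensitive to the small lexical changes discussed in Section 1. A minimal sketch of that core mechanism (unigram clipped precision only, no brevity penalty or geometric mean; toy tokens, not the paper's data):

```python
from collections import Counter

def clipped_precision(hypothesis, reference, n=1):
    """Modified n-gram precision: each hypothesis n-gram's count is
    clipped to its maximum count in the reference (the core of BLEU)."""
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    hyp, ref = ngrams(hypothesis), ngrams(reference)
    overlap = sum(min(count, ref[gram]) for gram, count in hyp.items())
    return overlap / max(1, sum(hyp.values()))

hyp = "the the the cat".split()
ref = "the cat sat".split()
# "the" appears 3 times in hyp but is clipped to its single reference
# occurrence: overlap = 1 + 1 = 2 out of 4 unigrams.
print(clipped_precision(hyp, ref))  # → 0.5
```

Changing a single pronoun or connective moves this ratio by at most one n-gram, which is why discourse-level fixes barely register in BLEU.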
{
"heading": "3 BENCHMARK TESTSETS",
"text": "We construct our benchmarking testsets based on four main principles:\nSelectivity. The testsets need to provide hard-to-translate contexts for MT models. We ensure this by looking at translation errors made by system submissions to campaigns like WMT and IWSLT.\nAuthenticity. The testsets cannot contain artificial or synthetic data, only natural text. Rather than generating testset samples using heuristics, we extract hard contexts from existing human-generated source text.\nMultilinguality. The testset extraction method should be automatic and applicable to multiple languages. Our framework can be used to extract testsets for all source languages that are part of the considered MT campaigns.\nAdaptability. The testsets should be easy to update frequently, making them adaptable to improvements in newer systems. Since we automatically extract hard contexts based on MT errors, our testsets are easy to update; they adapt to errors in newer (and possibly more accurate) systems, making the tasks harder over time.\nWe use the system outputs released by WMT and IWSLT for the most recent years (Nadejde et al., 2016; Bojar et al., 2017; 2018; 2019; Cettolo et al., 2016; 2017) to build our testsets. For De-En, Ru-En and Zh-En, these consist of translation outputs from 68, 41 and 47 unique systems respectively. Since the data comes from a wide variety of systems, our testsets representatively aggregate different types of errors from several (arguably SOTA) models. Also note that the MT models we are benchmarking are not a part of these system submissions to WMT, so there is no potential bias in the testsets."
},
{
"heading": "3.1 ANAPHORA",
"text": "Anaphora are references to entities that occur elsewhere in a text; mishandling them can result in ungrammatical sentences or in the reader inferring the wrong antecedent, leading to misunderstanding of the text (Guillou, 2012). We focus specifically on incorrect pronoun translations.\nTestset. To obtain hard contexts for pronoun translation, we look for source texts that lead to erroneous pronoun translations in system outputs. We align the system translations with their references, and collect the cases in which the translated pronouns do not match the reference.4\nOur anaphora testset is an updated version of the one proposed by Jwalapuram et al. (2019). We filter the system translations based on their list of cases where the translations can be considered wrong, rather than acceptable variants. The corresponding source texts are extracted as a test suite for pronoun translation. This gives us a pronoun benchmark testset of 2564 samples for De-En, 2368 for Ru-En and 1540 for Zh-En.\nEvaluation. Targeted evaluation of pronouns in MT has been challenging, as it is not fair to expect an exact match with the reference. Evaluation methods like APT (Miculicich Werlen & Popescu-Belis, 2017) or AutoPRF (Hardmeier & Federico, 2010) are specific to language pairs or lists of pronouns, requiring extensive manual intervention. They have also been criticised for failing to produce evaluations that are consistent with human judgments (Guillou & Hardmeier, 2018).\nJwalapuram et al. (2019) propose a pairwise ranking model that scores “good” pronoun translations (as in the reference) higher than “poor” pronoun translations (as in the MT output) in context, and show that their model is good at making this distinction, along with having high agreement with human judgments. 
However, they do not rank multiple system translations against each other, which is our main goal; the absolute scores produced by their model are not useful since it is trained in a pairwise fashion.\nWe devise a way to use their model to score and rank system translations in terms of pronouns. First, we retrain their model with more up-to-date WMT data (more details in Appendix A.1). We obtain a score for each benchmarked MT system (S2S, CONCAT, etc.) translation using the model, plus the corresponding reference sentence. We then normalize the score for each translated sentence by calculating the difference with the reference. To get an overall score for an MT system, the assigned scores are summed across all sentences in the testset:\nScore_sys = Σ_i [ρ_i(ref; θ) − ρ_i(sys; θ)] (1)\nwhere ρ_i(·; θ) denotes the score given to sentence i by the pronoun model θ. The systems are ranked based on this overall score, where a lower score indicates better performance. We conduct a user study to confirm that the model rankings correspond with human judgments, obtaining an agreement of 0.91 between four participants who annotated 100 samples. Appendix A.1 gives details (e.g., interface, participants, agreement) about the study."
},
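The ranking in Eq. 1 can be sketched as follows. The per-sentence scores stand in for outputs of the trained pronoun model ρ (hypothetical toy values, not the authors' released model):

```python
def rank_systems(ref_scores, sys_scores):
    """Rank MT systems by Eq. 1: sum over sentences i of
    rho_i(reference) - rho_i(system); a lower total means the system's
    pronoun translations sit closer to the reference."""
    totals = {
        name: sum(r - s for r, s in zip(ref_scores, scores))
        for name, scores in sys_scores.items()
    }
    return sorted(totals, key=totals.get)  # ascending: best system first

# Toy per-sentence scores from a hypothetical pronoun model rho.
ref = [0.9, 0.8, 0.95]
systems = {
    "S2S":    [0.5, 0.70, 0.60],
    "CONCAT": [0.8, 0.75, 0.90],
}
print(rank_systems(ref, systems))  # → ['CONCAT', 'S2S']
```

Differencing against the reference per sentence, rather than comparing raw model scores, is what makes scores from a pairwise-trained model usable for ranking.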
{
"heading": "3.1.1 RESULTS AND ANALYSIS",
"text": "The ranking results obtained from evaluating the MT systems on our pronoun benchmark testset using our evaluation measure are given in Table 4 (first two columns). We also report common pronoun errors for each model based on our manual analysis (last three columns).\nTable 4 (fragment): Rank, Model, followed by error-analysis percentages:\n1 HAN 31 48 21\n2 ANAPH 29 46 25\n3 CONCAT 29 46 25\n4 SAN 32 44 24\nSpecifically, we observed the following types of errors in our analysis of a subset of the translation data:\n(i) Gender copy. Translating from De/Ru to En often requires ‘flattening’ of gendered pronouns to it, since De/Ru assign gender to all nouns. In many cases, machine-translated pronouns tend to (mistakenly) agree with the source language. For example, diese Wohnung in Earls Court..., und sie hatte... is translated to: apartment in Earls Court, and she had..., which keeps the female gender expressed in sie, instead of translating it to it.\n(ii) Named entity. A particularly hard problem is to infer gender from a named entity, e.g., Lady Liberty...She is meant to... she is wrongly translated to it. Such examples demand higher inference abilities (e.g., distinguishing male/female names).\n(iii) Language-specific phenomena. In Russian and Chinese, pronouns are often dropped; sentences become ungrammatical in English without them. Pronouns can also be ambiguous in the source language; e.g., in German, the pronoun sie can mean both she and you, depending on capitalization, sentence structure, and context.\nOverall, we observe that the advantages of contextual models are not consistent across languages. They seem to use context well in Ru-En, but fail to outperform S2S or CONCAT in De-En, while Zh-En is inconclusive. The TGTCON model is consistently poor in this task. The partial success of the S2S model can be explained by its tendency to use it as the default pronoun, which statistically appears most often due to the lack of grammatical gender in English. More variability in pronouns occurs in the outputs of the context-aware models, but this does not contribute to greater success.\n4 This process requires the pronouns in the target language to be separate morphemes, as in English."
},
{
"heading": "3.2 COHERENCE AND READABILITY",
"text": "Pitler & Nenkova (2008) define coherence as the ease with which a text can be understood, and view readability as an equivalent property that indicates whether it is well-written.\nTestset. To test for coherence and readability, we try to find documents that can be considered hard to translate. We use the coherence model proposed by Moon et al. (2019), which is trained in a pairwise ranking fashion on WSJ articles, where a negative document is formed by shuffling the sentences of an original (positive) document. It models syntax, inter-sentence coherence relations and global topic structures. Some studies have shown that MT outputs are incoherent (Smith et al., 2015; 2016; Läubli et al., 2018). We thus retrain the coherence model with reference translations as positive and MT outputs as negative documents to better capture the coherence issues that are present in MT outputs (more details in Appendix A.2). We use older WMT submissions from 2011-2015 to ensure that the training data does not overlap with the benchmark testset data.\nThe coherence model takes a system translation (multi-sentential) and its reference as input and produces a score for each. Similar to Eq. 1, we consider the difference between the scores produced by the model for the reference and the translated text as the coherence score for the translated text.\nFor a given source text (document) in the WMT testsets, we obtain the coherence scores for each of the translations (i.e., WMT/IWSLT submissions) and average them. The source texts are sorted based on the average coherence scores of their translations. The texts that have lower average coherence scores can be considered to have been hard to translate coherently. We extract the source texts with scores below the median. These source texts form our benchmark testset for coherence and readability. 
This yields 272 documents (5,611 sentences) for De-En, 330 documents (4,427 sentences) for Ru-En and 210 documents (3,050 sentences) for Zh-En.\nEvaluation. Coherence and readability is also a hard task to evaluate, as it can be quite subjective. We resort to model-based evaluation here as well, to capture the different aspects of coherence in translations. We use our retrained coherence model to score the benchmarked MT system translations and modify the scores for use in the same way as the anaphora evaluation (Eq. 1) to obtain a relative ranking. As mentioned before (Sec. 3), the benchmarked MT systems do not overlap with the WMT system submissions, so there is no potential bias in evaluation since the testset extraction and the evaluation processes are independent. To confirm that the model does in fact produce rankings that humans would agree with, and to validate our model retraining, we conduct a user study, and obtain an agreement of 0.82 between three participants who annotated 100 samples. More details about the study can be found in the Appendix (A.2).\n3.2.1 RESULTS AND ANALYSIS\nTable 5: Coherence and Readability evaluation: Rankings of the different models for each language pair, obtained from our evaluation procedure.\nRk | De-En | Ru-En | Zh-En\n1 | CONCAT | ANAPH | ANAPH\n2 | SAN | CONCAT | CONCAT\n3 | S2S | TGTCON | S2S\n4 | ANAPH | SAN | TGTCON\n5 | TGTCON | S2S | SAN\n6 | HAN | HAN | n/a\nWe identified some frequent coherence and readability errors (more examples in Appendix A.8):\n(i) Inconsistency. As in (Somasundaran et al., 2014), we observe that inconsistent translation of words across sentences (in particular named entities) breaks the continuity of meaning.\n(ii) Translation error. 
Errors at various levels, spanning from ungrammatical fragments to model hallucinations, introduce phrases which bear little relation to the whole text (Smith et al., 2016):\nReference: There is huge applause for the Festival Orchestra, who appear on stage for the first time – in casual leisurewear in view of the high heat.\nTranslation: There is great applause for the solicitude orchestra , which is on the stage for the first time, with the heat once again in the wake of an empty leisure clothing.\nFrom the rankings in Table 5, we can see that ANAPH is the most coherent model for Zh-En and Ru-En but performs poorly in De-En, similar to the pronoun benchmark. Generally, CONCAT is better than complex contextual models in this task."
},
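The testset extraction step described above (average the per-document coherence of all submissions, then keep the below-median source documents) can be sketched as follows, with toy scores in place of the retrained coherence model's outputs:

```python
from statistics import mean, median

def hard_documents(doc_scores):
    """doc_scores: {doc_id: [coherence score of each system's translation]}.
    Returns the source documents whose average translation coherence falls
    below the median, i.e. those that were hard to translate coherently."""
    avg = {doc: mean(scores) for doc, scores in doc_scores.items()}
    cutoff = median(avg.values())
    return sorted(doc for doc, a in avg.items() if a < cutoff)

# Toy averages over hypothetical WMT/IWSLT submission scores.
scores = {
    "doc1": [0.9, 0.8],   # translated coherently by most systems
    "doc2": [0.2, 0.3],   # consistently incoherent translations
    "doc3": [0.5, 0.6],
}
print(hard_documents(scores))  # → ['doc2']
```

Averaging over many independent systems before thresholding is what keeps one outlier submission from pulling an easy document into the testset.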
{
"heading": "3.3 LEXICAL CONSISTENCY",
"text": "Lexical consistency in translation was first defined as ‘one translation per discourse’ by Carpuat (2009), i.e., a particular source word is consistently translated to the same target word in that context. Guillou (2013) analyzes different human-generated texts and concludes that human translators tend to maintain lexical consistency to support the important elements in a text. The consistent usage of lexical items in a discourse can be formalized by computing lexical chains (Morris & Hirst, 1991; Lotfipour-Saedi, 1997).\nTestset. To extract a testset for lexical consistency evaluation, we first align the translations from the system submissions with their references. In order to get a reasonable lexical chain formed by a consistent translation, we consider translations of blocks of 3-5 sentences in which the (lemmatized) word we are considering occurs at least twice in the reference. For each such word, we check if the corresponding system translation produces the same (lemmatized) word at least once, but fewer than the number of times the word occurs in the reference. In such cases, the system translation has failed to be lexically consistent (see Table 3 for an example). We limit the errors considered to nouns and adjectives. The source texts of these cases form the benchmark testset. This gives us a testset with 618 sets (i.e., text blocks) for De-En (3058 sentences), 732 sets for Ru-En (3592 sentences) and 961 sets for Zh-En (4683 sentences).\nEvaluation. For lexical consistency, we adopt a simple evaluation method. For each block of 3-5 sentences, we either have a consistent translation of the word in focus, or the translation is inconsistent. We simply count the instances of consistency and rank the systems based on the percentage. 
Model translations are considered lexically inconsistent if at least one translation of a particular word matches the reference translation, but this translated word occurs fewer times than in the reference. For samples where no translation matches the reference, we cannot be sure about inconsistency, since a synonym of the reference translation could have been used consistently. Therefore, we do not consider them for calculating the percentage used for the main ranking, but we report the consistency percentage as a fraction of the full testset for comparison (further discussion in Appendix A.3)."
},
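The three-way decision described in the Evaluation paragraph (consistent / inconsistent / unknown because a synonym may have been used) can be sketched directly from lemma counts; the lemma lists below are illustrative toy data:

```python
def consistency_status(word, ref_lemmas, sys_lemmas):
    """Classify a system's translation block for one lemmatized word,
    following the criterion in Section 3.3. Assumes the word occurs
    at least twice in the reference block."""
    ref_n, sys_n = ref_lemmas.count(word), sys_lemmas.count(word)
    if sys_n == 0:
        return "unknown"        # a consistent synonym may have been used
    if sys_n < ref_n:
        return "inconsistent"   # matched the reference, but not every time
    return "consistent"

ref = ["utopia", "be", "place", "utopia", "exist"]
sys = ["utopia", "be", "place", "wonderland", "exist"]
print(consistency_status("utopia", ref, sys))  # → inconsistent
```

Blocks classified "unknown" are excluded from the main ranking percentage (%Con) but still counted in the full-testset figure (%Full), matching the two columns reported in Table 6.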
{
"heading": "Table 6 (fragment): Lexical consistency results (Rk, Model, %Con, %Full, Syn, Rel, Om, NE, Rd)",
"text": "1 ANAPH 30.94 17.48 43 14 29 0 14\n2 TGTCON 29.84 16.02 14 0 29 43 14\n3 S2S 28.27 18.21 0 25 38 25 13\n4 CONCAT 28.17 16.65 0 66 0 16 33\n5 SAN 26.33 13.42 0 13 38 25 25"
},
{
"heading": "3.3.1 RESULTS AND ANALYSIS",
"text": "The rankings of the MT systems based on the percentage of samples with consistent translations on the lexical consistency benchmark testsets are given in Table 6 (first four columns), along with our findings from a manual analysis of a subset of the translations (last five columns). Our manual inspection of the lexical chains shows the following tendencies:\n(i) Synonyms & related words. Words are exchanged for their synonyms (poll - survey), hypernyms/hyponyms (ambulance - car) or related concepts (wine - vineyard).\n(ii) Named entities. Models tend to distort proper names and translate them inconsistently. For example, the original name Füchtorf (name of a town) gets translated to feedingcommunity.\n(iii) Omissions. Occur when words are omitted altogether from the lexical chain.\nThe overall low quality of Russian translations contributes to the prevalence of Random translations, and the necessity to transliterate named entities increases NE errors for both Ru-En and Zh-En. Here we see some complex contextual models performing well; TGTCON leads the board across De-En, Ru-En and Zh-En, with ANAPH performing similarly well for Ru-En and Zh-En. Generally, we would expect a consistent advantage for target-side context models, which should be able to “remember” their own translation of a word from previous sentences; however, this only materializes for TGTCON and not for HAN."
},
{
"heading": "3.4 DISCOURSE CONNECTIVES",
"text": "Discourse connectives are used to link the contents of texts together by signaling coherence relations that are essential to the understanding of the texts (Prasad et al., 2014). Failing to translate a discourse connective correctly can result in texts that are hard to understand or ungrammatical. Finding errors in discourse connective translations can be quite tricky, since there are often many acceptable variants. To mitigate confusion, we limit the errors we consider in discourse connectives to the setting where the reference contains a connective but the translation fails to produce any (see Table 3 for an example).\nUser Study. To confirm that missing connectives are problematic, we conduct a user study. Participants are shown two previous sentences from the reference for context, and asked to choose between two candidate options for the sentence that may follow. These options consist of the reference translation, which includes a connective, and an MT output that is missing the connective translation. Participants are asked to choose the sentence which more accurately conveys the intended meaning. See Figure 4b in Appendix A.4 for an example interface.\nWe obtain an agreement of 0.82 between two participants who annotated 200 samples, that translations with connectives are preferred. If the MT outputs with missing connectives were structured in such a way as to have implicit discourse relations, the agreements that favoured the references should be significantly lower. However, the strong agreements favouring the reference with the connective indicate that the missing connectives in MT outputs are indeed an issue. More details about the study can be found in the Appendix (A.4).\nTestset. It would not be appropriate to simply extract connectives using a list of candidates, since those words may not always act in the capacity of a discourse connective. In order to identify the discourse connectives, we build a simple explicit connective classifier (a neural model) using annotated data from the Penn Discourse Treebank, or PDTB (Prasad et al., 2018) (details in Appendix A.4). The classifier achieves an average cross-validation F1 score of 93.92 across the 25 sections of PDTB-3, showing that it generalizes well.\nAfter identifying the explicit connectives in the reference translations, we align them with the corresponding system translations and extract the source texts of cases with missing connective translations. We only use the classifier on the reference text, but consider all possible candidates in the system translations to give them the benefit of the doubt. This gives us a discourse connective benchmark testset with 172 samples for De-En, 147 for Ru-En and 362 for Zh-En.\nEvaluation. There has been some work on semi-automatic evaluation of translated discourse connectives (Meyer et al., 2012; Hajlaoui & Popescu-Belis, 2013); however, it is limited to En-Fr, based on a dictionary list of equivalent connectives, and requires using potentially noisy alignments and other heuristics. In the interest of evaluation simplicity, we expect the model to produce the same connective as the reference. Since the nature of the challenge is that connectives tend to be omitted altogether, we report both the accuracy of connective translations with respect to the reference, and the percentage of cases where any candidate connective is produced.\nTable 7 (fragments, two language pairs): Rk, Model, Accuracy, %Any, followed by manual-analysis percentages:\n1 ANAPH 49.42 75.58 75 25 0\n2 SAN 48.25 72.67 67 33 0\n3 TGTCON 48.25 72.09 40 53 6\n4 S2S 47.67 73.84 76 24 0\nand\n1 ANAPH 40.81 68.70 63 30 7\n2 S2S 37.41 73.47 59 28 12\n3 TGTCON 36.05 63.95 73 19 8\n4 SAN 35.37 60.54 62 28 9"
},
{
"heading": "3.4.1 RESULTS AND ANALYSIS",
"text": "The rankings of MT systems based on their accuracy of connective translations are given in Table 7, along with our findings from a manual analysis on a subset of the translations. In benchmark outputs, we observed mostly omissions of connectives (disappears in the translation), synonymous translations (e.g., Naldo is also a great athlete on the bench  Naldo’s “great sport\" on the bank, too.), and mistranslations. More examples can be found in the Appendix (A.8).\nThe ranking shows that the S2S model performs well for RuEn and ZhEn but not for DeEn. ANAPH continues its high performance in RuEn and this time also DeEn, doing poorly for ZhEn, while HAN is consistently poor with a lot of omissions."
},
{
"heading": "4 DISCUSSION",
"text": "Our benchmarking reemphasizes the gap between BLEU scores and translation quality at the discourse level. The overall BLEU scores for DeEn and RuEn are higher than the BLEU scores for ZhEn; however, we see that ZhEn models have higher accuracies in the discourse connective task, and also outperform RuEn in lexical consistency. Similarly, for RuEn, both SAN and HAN have higher BLEU scores than the S2S and CONCAT models, but are unable to outperform these simpler models consistently in the discourse tasks, often ranking last.\nWe also reveal a gap in performance consistency across language pairs. Models may be tuned for a particular language pair, such as ANAPH trained for EnRu (Voita et al., 2018a). For the same language pair (RuEn), we show results consistent with what is reported; the model ranks first or second for all phenomena. However, it is not consistently successful in other languages, e.g., ranking\nclose to bottom for almost all cases in DeEn. In general, our findings match the conclusions from Kim et al. (2019) regarding the lack of satisfactory performance gains in contextaware models.\nAlthough our testsets and evaluation procedures have their limitations, like only checking for missing connectives or being unable to detect consistently translated synonyms of reference translations, they are a first step toward a standardized, comprehensive evaluation framework for MT models that spans multiple languages. They are useful for measuring basic model proficiency, performance consistency and for discovering MT deficiencies. Discourseaware models have been advocated for improving MT (Sennrich, 2018); as more models are proposed, our framework will be a valuable resource that provides a better picture of model capabilities. With advances in NMT models and also in evaluation models for complex phenomena, harder challenges can be added and evaluated."
},
{
"heading": "5 GENERALIZABILITY TO OTHER LANGUAGES",
"text": "Procedures used to create our testsets can be generalized to create testsets in other languages. We briefly describe the possibilities here:\n• Anaphora: The pronouns need to be separate morphemes (and not attached to verbs etc.). If there are several equivalent pronoun translations, a list may be needed so they can be excluded from being considered translation errors; e.g., Miculicich Werlen & PopescuBelis (2017) has such a list for French, a list can also be collected through user studies as in Jwalapuram et al. (2019).\n• Coherence & Readability: The coherence model (Moon et al., 2019) used to find poorly translated texts was retrained on reference vs. MT outputs. It is also possible to do this for other languages for which WMT (or IWSLT) system outputs are available. The coherence model from Moon et al. (2019) is an endtoend neural model that does not rely on any languagespecific features, and thus can be trained on any target language. However, languagespecific or multilingual coherence models could also be used since Moon et al. (2019) primarily train and test their model on English (WSJ) data.\n• Lexical Consistency: A lemmatizer was used to reduce common suffixes for detecting lexical consistency (e.g., “box” and “boxes” should not be detected as inconsistent words), so a similar tool will be needed for any other target language; e.g., CLTK (Johnson et al., 2014–2020) provides a lemmatizer for several languages.\n• Discourse Connectives: Discourse connectives also need to be separate morphemes. We built a classifier trained on PDTB data to identify connectives since they are ambiguous in English. Datasets analogous to PDTB in other languages e.g., PCC (German) (Bourgonje & Stede, 2020) and CDTB (Chinese) (Zhou & Xue, 2015), etc. are available."
},
{
"heading": "6 CONCLUSIONS",
"text": "We presented the first of their kind discourse phenomena based benchmarking testsets called the DiP tests, designed to be challenging for NMT systems. Our main goal is to emphasize the need for comprehensive MT evaluations across phenomena and language pairs, which we do by highlighting the performance inconsistencies of complex contextaware models. We will release the discourse benchmark testsets and evaluation frameworks for public use, and also propose to accept translations from MT systems to maintain a leaderboard for the described phenomena."
},
{
"heading": "1 CONCAT 31.96 112.583",
"text": ""
},
{
"heading": "2 S2S 31.65 113.783",
"text": ""
},
{
"heading": "3 SAN 29.32 117.838",
"text": ""
},
{
"heading": "4 HAN 29.69 118.067",
"text": ""
},
{
"heading": "1 HAN 25.11 160.411",
"text": ""
},
{
"heading": "2 ANAPH 27.66 164.603",
"text": ""
},
{
"heading": "3 CONCAT 24.56 168.092",
"text": ""
},
{
"heading": "4 SAN 24.34 176.143",
"text": "A.2 COHERENCE\nRetrained model. We retrain the pairwise coherence model in Moon et al. (2019) to suit the MT setting, with reference translations as the positive documents and the MT outputs as the negative documents. The results are shown in Table 10.\nUser study. Figure 3 shows our user study interface. The participants are shown three candidate English translations of the same source text, and asked to rank the texts on how coherent and readable they are. To optimize annotation time, participants are only shown the first four sentences of the document; they annotate 100 such samples. We also include the reference as one of the candidates for control, and to confirm that we are justified in retraining the evaluation model to assign a higher score to the reference. Three participants took part in the study. Our control experiment results in an AC1 agreement of 0.84.\nThe agreement between the human judgements and the rankings obtained by using the original coherence model trained on permuted WSJ articles (also news domain, like the WMT data), is 0.784. The fact that the original model performs no worse than 0.784 shows that there are definitely coherence issues in such (MT output vs reference) data that are being picked up.\nThe agreement between the human judgements and the retrained coherence evaluation model’s rankings is 0.82. Our retrained model is therefore also learning useful taskspecific features in addition"
},
{
"heading": "1 CONCAT 31.96 5038.057",
"text": ""
},
{
"heading": "2 SAN 29.32 5059.811",
"text": ""
},
{
"heading": "3 S2S 31.65 5120.633",
"text": ""
},
{
"heading": "4 ANAPH 29.94 5166.320",
"text": "to general coherence features. The high agreement validates our proposal to use the modified coherence model to evaluate the benchmarked MT systems.\nResults. The total assigned scores (difference between reference and translation scores) obtained for each system after summing the over the samples in the respective testsets are given in Table 11. The models are ranked based on these scores from lowest score (best performing) to highest score (worst performing).\nA.3 LEXICAL CONSISTENCY\nDataset extraction. One possible issue with our method could be that reference translations may contain forced consistency, i.e., human translators introduce consistency to make the text more readable, despite inconsistent word usage in the source. It may not be reasonable to expect consistency in a system translation if there is none in the source. To confirm, we conducted a manual analysis where we compared the lexical chains of nouns and adjectives in Russian source texts against the lexical chains in the English reference. We find that in a majority (77%) of the cases, the lexical chains in the source are reflected accurately in the reference, and there are relatively few cases where humans force consistency.\nEvaluation. It is possible that the word used in the system translation is not the same as the word in the reference, but the MT output is still consistent (e.g., a synonym used consistently). We tried to use alignments coupled with similarity obtained from ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) embeddings to evaluate such cases to avoid unfairly penalizing the system translations, but we found this to be noisy and unreliable. Thus, we check consistency against the reference; if at least one translation matches the reference but the translated word occurs fewer times than it does in the reference, the translation is considered inconsistent. 
For samples where there is no common translation between the system output and the reference, we cannot be sure whether it is consistent or not, so we exclude them when calculating the primary ranking percentage. We therefore report both the percentage of consistent samples as a fraction of consistent + inconsistent samples, and the percentage of consistent samples as a fraction of the full dataset, for comparison purposes.\nA.4 DISCOURSE CONNECTIVES\nConnective classification model. We build an explicit connective classifier to identify candidates that are acting in the capacity of a discourse connective. The model consists of an LSTM layer (Hochreiter & Schmidhuber, 1997) followed by a linear layer for binary classification, initialized with ELMo embeddings (Peters et al., 2018). We use annotated data from the Penn Discourse Treebank (PDTBv3) (Prasad et al., 2018) and conduct cross-validation experiments across all 25 sections. The average Precision, Recall and F1 scores from the cross-validation experiments are reported in Table 12. Our classifier achieves an average cross-validation F1 of 93.92, which shows that it generalizes very well. The high precision also provides certainty that the model is classifying discourse connectives reliably.\nUser Study. To confirm that the presence of the connective conveys information and contributes to the readability and understanding of the text, we conducted two user studies. For the first study, as presented in Figure 4a, participants are shown two previous sentences from the reference for context, and asked to choose between two candidate options for the sentence that may follow. These options consist of the reference translation with the connective highlighted, and the same text with the connective deleted.\nParticipants are asked to choose the sentence which more accurately conveys the intended meaning. Two participants annotated 200 such samples.
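Agreement in these studies is reported as Gwet's AC1. For two raters and two categories it reduces to the following minimal sketch (the studies' exact computation, e.g. how excluded items are handled, is not specified and is an assumption here):

```python
def gwet_ac1(rater_a, rater_b):
    """Gwet's AC1 chance-corrected agreement for two raters, two categories.

    rater_a, rater_b: parallel lists of binary labels (0/1), one pair of
    labels per annotated item.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_a = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Prevalence of category 1 pooled across both raters.
    pi = (sum(rater_a) + sum(rater_b)) / (2 * n)
    # AC1 chance agreement for two categories.
    p_e = 2 * pi * (1 - pi)
    return (p_a - p_e) / (1 - p_e)
```

Since 2π(1 − π) ≤ 0.5, the denominator never vanishes, which is one reason AC1 behaves better than kappa under extreme prevalence (e.g., when nearly all annotators prefer the reference).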
The reference with the connective was chosen over the version without the connective with an AC1 agreement of 0.98. Table 13 shows the connective-wise breakdown.\nIn the second study, the participants were shown the reference along with the system translation that was missing the connective (Figure 4b). In this study, the setup has no artificially constructed data; the idea is to check whether the system translation might be structured in such a way as to require no connective. However, the AC1 agreement for preferring the reference was 0.82 (2 annotators for 200 samples; different annotators from the first study) for this study as well, which is still quite high. Table 13 has the connective-wise breakdown; here we see that the results are slightly different for certain connectives, but overall the strong preference for the reference with the connective is retained. Our assumption that connectives must be translated is validated through both studies.\nNote that participants may not prefer the version without the connective due to loss of grammaticality or loss of sense information. Although the two are indistinguishable in this setting, we argue that since both affect translation quality, it is reasonable to expect a translation for the connectives.\nNote that for both studies, participants were also given the options to choose ‘Neither’ in case they didn’t prefer either choice, or ‘Invalid’ in case there was an issue with the data itself (e.g., transliteration issues); data marked as such was excluded from further consideration.\nA.5 TGTCON MODEL ARCHITECTURE\nHere we describe the model architecture of TGTCON. The decoder introduced in Vaswani et al. (2017) is used for encoding the target sentence, and the encoder, adopted from the original encoder of the Transformer architecture, is used to encode the context on the target side. Each part (target decoder and context encoder) is repeated 6 times (N=6).
In the last layer, two multi-head attention operations are performed, followed by layer normalization (similar to Vaswani et al. (2017)). The first multi-head attention is self-attention on the target sentence, whereas the second is a cross-attention between the representation of the target and the target context. These two representations are fused by a gated linear sum, which decides the amount of information from each representation that is to be passed on. Figure 5 shows the model architecture in detail.\nA.6 MODEL TRAINING\nTraining Data. It is essential to provide the models with training data that contains adequate amounts of discourse phenomena if we expect them to learn such phenomena. To construct such datasets, we first manually investigated the standard WMT corpora consisting of UN (Ziemski et al., 2016), Europarl (Tiedemann, 2012) and News Commentary, as well as the standard IWSLT dataset (Cettolo et al., 2012). We analyzed 100 randomly selected pairs of consecutive English sentences from each dataset, where the first sentence was treated as the context. Table 14 shows the percentage of cases containing the respective discourse phenomena.\nIn accordance with intuition, data sources based on narrative texts such as IWSLT exhibit increased amounts of discourse phenomena compared to strictly formal texts such as the UN corpus. On the other hand, the UN corpus consists of largely unrelated sentences, where only lexical consistency is well-represented due to the usage of very specific and strict naming of political concepts. We decided to exclude the UN corpus and combine the other datasets that have more discourse phenomena for De-En and Ru-En; for Zh-En, we keep UN and add WikiTitles to bolster the BLEU scores. Our training dataset is therefore a combination of the Europarl, IWSLT and News Commentary datasets, plus UN and WikiTitles for Zh-En. The development set is a combination of WMT2016 and older WMT data (excluding 2014).
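Returning to the TGTCON architecture of A.5: the gated linear sum that fuses the self-attention and cross-attention outputs can be sketched as below. This is a minimal NumPy sketch under assumed shapes; the parameter names (W_g, b_g) and the exact gate parameterization are assumptions, not the paper's code:

```python
import numpy as np

def gated_sum(h_self, h_ctx, W_g, b_g):
    """Fuse the target self-attention representation h_self with the
    target-context cross-attention representation h_ctx via a learned gate.

    h_self, h_ctx: (seq_len, d_model) arrays
    W_g: (2*d_model, d_model) gate weights; b_g: (d_model,) gate bias
    """
    # The gate sees both representations and outputs values in (0, 1).
    pre = np.concatenate([h_self, h_ctx], axis=-1) @ W_g + b_g
    g = 1.0 / (1.0 + np.exp(-pre))  # elementwise sigmoid
    # Convex combination: g decides how much of each representation passes.
    return g * h_self + (1.0 - g) * h_ctx
```

With zero-initialized gate parameters the sigmoid outputs 0.5 everywhere, so the fusion starts as a plain average of the two representations and training moves the balance per dimension.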
Note that our validation set does not have any data in common with the benchmark testsets. We test on WMT2014 (De/Ru-En) / WMT2017 (Zh-En) data. We tokenize the data using Jieba for Zh and the Moses software5 for the other languages, lowercase the text, and apply BPE encodings6 from Sennrich et al. (2016). We learn the BPE encodings with the command learn_joint_bpe_and_vocab -s 40000.\nTraining. For the context-aware models, we use the implementations from the official author repositories. As the official code for ANAPH (Voita et al., 2018b) has not been released, we implement the model in the Fairseq framework (Ott et al., 2019).7 For training the S2S and CONCAT models, we used the Transformer implementation from fairseq. We confirmed with the authors of HAN and SAN that our configurations were correct, and we took the best configuration directly from the ANAPH paper.\nA.7 MODEL PARAMETERS\nParameters used to train HAN are displayed in Table 15, and parameters for the S2S, CONCAT, ANAPH, and SAN models are displayed in Table 16.\nA.8 ERROR EXAMPLES\nExamples of the different types of errors encountered across the tasks are given in Table 17.\n5https://www.statmt.org/moses/ 6https://github.com/rsennrich/subword-nmt/ 7https://github.com/pytorch/fairseq\nTable 15: Configuration parameters for training the HAN model, taken from the authors’ repository\nhttps://github.com/idiap/HAN_NMT/\nParameters Values Step 1: sentence-level NMT encoder_type transformer decoder_type transformer enc_layers 6 dec_layers 6 label_smoothing 0.1 rnn_size 512 position_encoding  dropout 0.1 batch_size 4096 start_decay_at 20 epochs 20 max_generator_batches 16 batch_type tokens normalization tokens accum_count 4 optim adam adam_beta2 0.998 decay_method noam warmup_steps 8000 learning_rate 2 max_grad_norm 0 param_init 0 param_init_glorot  train_part sentences  Step 2: HAN encoder others  see Step 1 others  see Step 1 batch_size 1024 start_decay_at 2 epochs 10 max_generator_batches 32 train_part all context_type 
HAN_enc context_size 3 Step 3: HAN joint others  see Step 1 others  see Step 1 batch_size 1024 start_decay_at 2 epochs 10 max_generator_batches 32 train_part all context_type HAN_join context_size 3 train_from [HAN_enc_model]"
}
]
 2020
 DIP BENCHMARK TESTS: EVALUATION BENCH

SP:b2fc6ca65add04fb32bcf7622d9098de9004ca2b
 [
"The authors present a framework that uses a combination of VAE and GAN to recover private user images using Side channel analysis of memory access . A VAELP model first reconstructs a coarse image from side channel information which is reshaped and processed using a convolutional network. The output of the VAELP model is refined using a GAN to add fine details. Compelling results are demonstrated for recovery of private information and state of art metrics are reported. "
]
 System side channels denote effects imposed on the underlying system and hardware when running a program, such as its accessed CPU cache lines. Side channel analysis (SCA) allows attackers to infer program secrets based on observed side channel logs. Given the ever-growing adoption of machine learning as a service (MLaaS), image analysis software on cloud platforms has been exploited by reconstructing private user images from system side channels. Nevertheless, to date, SCA is still highly challenging, requiring technical knowledge of the victim software’s internal operations. For existing SCA attacks, comprehending such internal operations requires heavyweight program analysis or manual effort. This research proposes an attack framework to reconstruct private user images processed by media software via system side channels. The framework forms an effective workflow by incorporating convolutional networks, variational autoencoders, and generative adversarial networks. Our evaluation of two popular side channels shows that the reconstructed images consistently match user inputs, making privacy leakage attacks more practical. We also show the surprising result that even one-bit data read/write pattern side channels, which are deemed minimally informative, can be used to reconstruct quality images using our framework.
 [
{
"affiliations": [],
"name": "Yuanyuan Yuan"
},
{
"affiliations": [],
"name": "Shuai Wang"
},
{
"affiliations": [],
"name": "Junping Zhang"
}
]
 [
{
"authors": [
"Onur Aciicmez",
"Cetin Kaya Koc"
],
"title": "Tracedriven cache attacks on AES",
"venue": "In ICICS,",
"year": 2006
},
{
"authors": [
"Anish Athalye",
"Nicholas Carlini",
"David Wagner"
],
"title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples",
"venue": "arXiv preprint arXiv:1802.00420,",
"year": 2018
},
{
"authors": [
"Andrew Brock",
"Jeff Donahue",
"Karen Simonyan"
],
"title": "Large scale GAN training for high fidelity natural image synthesis",
"venue": "In International Conference on Learning Representations,",
"year": 2019
},
{
"authors": [
"Eleonora Cagli",
"Cécile Dumas",
"Emmanuel Prouff"
],
"title": "Convolutional neural networks with data augmentation against jitterbased countermeasures",
"venue": "In International Conference on Cryptographic Hardware and Embedded Systems,",
"year": 2017
},
{
"authors": [
"R.C. Chiang",
"S. Rajasekaran",
"N. Zhang",
"H.H. Huang"
],
"title": "Swiper: Exploiting virtual machine vulnerability in thirdparty clouds with competition for i/o resources",
"venue": "IEEE Transactions on Parallel and Distributed Systems,",
"year": 2015
},
{
"authors": [
"Alex Clark. Pillow"
],
"title": "Python image analysis library, 2020",
"venue": "URL https://pillow. readthedocs.io/en/stable/",
"year": 2020
},
{
"authors": [
"Bart Coppens",
"Ingrid Verbauwhede",
"Koen De Bosschere",
"Bjorn De Sutter"
],
"title": "Practical mitigations for timingbased sidechannel attacks on modern x86 processors",
"venue": "In IEEE SP,",
"year": 2009
},
{
"authors": [
"Emily Denton",
"Rob Fergus"
],
"title": "Stochastic video generation with a learned prior",
"venue": "In International Conference on Machine Learning,",
"year": 2018
},
{
"authors": [
"Xiaowan Dong",
"Zhuojia Shen",
"John Criswell",
"Alan L Cox",
"Sandhya Dwarkadas"
],
"title": "Shielding software from privileged sidechannel attacks",
"venue": "In 27th USENIX Security Symposium,",
"year": 2018
},
{
"authors": [
"Goran Doychev",
"Dominik Feld",
"Boris Kopf",
"Laurent Mauborgne",
"Jan Reineke"
],
"title": "CacheAudit: A tool for the static analysis of cache side channels",
"venue": "In USENIX Sec.,",
"year": 2013
},
{
"authors": [
"Cynthia Dwork",
"Frank McSherry",
"Kobbi Nissim",
"Adam Smith"
],
"title": "Calibrating noise to sensitivity in private data analysis",
"venue": "In Theory of cryptography conference,",
"year": 2006
},
{
"authors": [
"Daniel Gruss",
"Julian Lettner",
"Felix Schuster",
"Olya Ohrimenko",
"Istvan Haller",
"Manuel Costa"
],
"title": "Strong and efficient cache sidechannel protection using hardware transactional memory",
"venue": "In USENIX Sec.,",
"year": 2017
},
{
"authors": [
"David Gullasch",
"Endre Bangerter",
"Stephan Krenn"
],
"title": "Cache games—bringing accessbased cache attacks on AES to practice",
"venue": "In Proc. IEEE Symp. on Security and Privacy (S&P),",
"year": 2011
},
{
"authors": [
"Yong Guo",
"Qi Chen",
"Jian Chen",
"Qingyao Wu",
"Qinfeng Shi",
"Mingkui Tan"
],
"title": "Autoembedding generative adversarial networks for high resolution image synthesis",
"venue": "IEEE Transactions on Multimedia,",
"year": 2019
},
{
"authors": [
"Marcus Hähnel",
"Weidong Cui",
"Marcus Peinado"
],
"title": "Highresolution side channels for untrusted operating systems",
"venue": "USENIX Annual Technical Conference,",
"year": 2017
},
{
"authors": [
"Y. Han",
"J. Chan",
"T. Alpcan",
"C. Leckie"
],
"title": "Using virtual machine allocation policies to defend against coresident attacks in cloud computing",
"venue": "IEEE Transactions on Dependable and Secure Computing,",
"year": 2017
},
{
"authors": [
"Kaiming He",
"Xiangyu Zhang",
"Shaoqing Ren",
"Jian Sun"
],
"title": "Deep residual learning for image recognition",
"venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,",
"year": 2016
},
{
"authors": [
"Benjamin Hettwer",
"Stefan Gehrer",
"Tim Güneysu"
],
"title": "Profiled power analysis attacks using convolutional neural networks with domain knowledge",
"venue": "In International Conference on Selected Areas in Cryptography,",
"year": 2018
},
{
"authors": [
"Benjamin Hettwer",
"Tobias Horn",
"Stefan Gehrer",
"Tim Güneysu"
],
"title": "Encoding power traces as images for efficient sidechannel analysis",
"venue": "arXiv preprint arXiv:2004.11015,",
"year": 2020
},
{
"authors": [
"Annelie Heuser",
"Michael Zohner"
],
"title": "Intelligent machine homicide",
"venue": "In International Workshop on Constructive SideChannel Analysis and Secure Design,",
"year": 2012
},
{
"authors": [
"Sanghyun Hong",
"Michael Davinroy",
"Yiǧitcan Kaya",
"Stuart Nevans Locke",
"Ian Rackow",
"Kevin Kulda",
"Dana DachmanSoled",
"Tudor Dumitraş"
],
"title": "Security analysis of deep neural networks operating in the presence of cache sidechannel attacks",
"venue": "arXiv preprint arXiv:1810.03487,",
"year": 2018
},
{
"authors": [
"Sanghyun Hong",
"Michael Davinroy",
"Yiǧitcan Kaya",
"Dana DachmanSoled",
"Tudor Dumitraş"
],
"title": "How to 0wn the nas in your spare time",
"venue": "In International Conference on Learning Representations,",
"year": 2020
},
{
"authors": [
"Seunghoon Hong",
"Dingdong Yang",
"Jongwook Choi",
"Honglak Lee"
],
"title": "Inferring semantic layout for hierarchical texttoimage synthesis",
"venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,",
"year": 2018
},
{
"authors": [
"Gabriel Hospodar",
"Benedikt Gierlichs",
"Elke De Mulder",
"Ingrid Verbauwhede",
"Joos Vandewalle"
],
"title": "Machine learning in sidechannel analysis: a first study",
"venue": "Journal of Cryptographic Engineering,",
"year": 2011
},
{
"authors": [
"Phillip Isola",
"JunYan Zhu",
"Tinghui Zhou",
"Alexei A Efros"
],
"title": "Imagetoimage translation with conditional adversarial networks",
"venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,",
"year": 2017
},
{
"authors": [
"Jaehun Kim",
"Stjepan Picek",
"Annelie Heuser",
"Shivam Bhasin",
"Alan Hanjalic"
],
"title": "Make some noise. unleashing the power of convolutional neural networks for profiled sidechannel analysis",
"venue": "IACR Transactions on Cryptographic Hardware and Embedded Systems,",
"year": 2019
},
{
"authors": [
"Diederik P. Kingma",
"Jimmy Ba"
],
"title": "Adam: A method for stochastic optimization",
"venue": null,
"year": 2014
},
{
"authors": [
"Ivan Laptev",
"Tony Lindeberg"
],
"title": "Velocity adaptation of spacetime interest points",
"venue": "In Proceedings of the 17th International Conference on Pattern Recognition,",
"year": 2004
},
{
"authors": [
"F. Liu",
"Q. Ge",
"Y. Yarom",
"F. Mckeen",
"C. Rozas",
"G. Heiser",
"R.B. Lee"
],
"title": "Catalyst: Defeating lastlevel cache side channel attacks in cloud computing",
"venue": null,
"year": 2016
},
{
"authors": [
"Fangfei Liu",
"Y. Yarom",
"Qian Ge",
"G. Heiser",
"R.B. Lee"
],
"title": "Lastlevel cache sidechannel attacks are practical",
"venue": "In 2015 IEEE Symposium on Security and Privacy,",
"year": 2015
},
{
"authors": [
"Ziwei Liu",
"Ping Luo",
"Xiaogang Wang",
"Xiaoou Tang"
],
"title": "Deep learning face attributes in the wild",
"venue": "In Proceedings of International Conference on Computer Vision (ICCV),",
"year": 2015
},
{
"authors": [
"ChiKeung Luk",
"Robert Cohn",
"Robert Muth",
"Harish Patil",
"Artur Klauser",
"Geoff Lowney",
"Steven Wallace",
"Vijay Janapa Reddi",
"Kim Hazelwood"
],
"title": "Pin: building customized program analysis tools with dynamic instrumentation",
"venue": "In Proceedings of the 2005 ACM SIGPLAN conference on Programming language design and implementation (PLDI’05),",
"year": 2005
},
{
"authors": [
"Houssem Maghrebi",
"Thibault Portigliatti",
"Emmanuel Prouff"
],
"title": "Breaking cryptographic implementations using deep learning techniques",
"venue": "In International Conference on Security, Privacy, and Applied Cryptography Engineering,",
"year": 2016
},
{
"authors": [
"TaeHyun Oh",
"Tali Dekel",
"Changil Kim",
"Inbar Mosseri",
"William T Freeman",
"Michael Rubinstein",
"Wojciech Matusik"
],
"title": "Speech2face: Learning the face behind a voice",
"venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,",
"year": 2019
},
{
"authors": [
"Yossef Oren",
"Vasileios P Kemerlis",
"Simha Sethumadhavan",
"Angelos D Keromytis"
],
"title": "The spy in the sandbox: Practical cache attacks in javascript and their implications",
"venue": "In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security,",
"year": 2015
},
{
"authors": [
"Alec Radford",
"Luke Metz",
"Soumith Chintala"
],
"title": "Unsupervised representation learning with deep convolutional generative adversarial networks",
"venue": "In Proceedings of 4th International Conference on Learning Representations,",
"year": 2016
},
{
"authors": [
"Himanshu Raj",
"Ripal Nathuji",
"Abhishek Singh",
"Paul England"
],
"title": "Resource management for isolation enhanced cloud services",
"venue": "In CCSW,",
"year": 2009
},
{
"authors": [
"Scott Reed",
"Zeynep Akata",
"Xinchen Yan",
"Lajanugen Logeswaran",
"Bernt Schiele",
"Honglak Lee"
],
"title": "Generative adversarial text to image synthesis",
"venue": "In International Conference on Machine Learning,",
"year": 2016
},
{
"authors": [
"Michael Schwarz",
"Moritz Lipp",
"Daniel Gruss",
"Samuel Weiser",
"Clémentine Maurice",
"Raphael Spreitzer",
"Stefan Mangard"
],
"title": "KeyDrown: Eliminating softwarebased keystroke timing sidechannel attacks",
"venue": null,
"year": 2018
},
{
"authors": [
"Shuai Wang",
"Pei Wang",
"Xiao Liu",
"Danfeng Zhang",
"Dinghao Wu"
],
"title": "CacheD: Identifying cachebased timing channels in production software",
"venue": "In 26th USENIX Security Symposium,",
"year": 2017
},
{
"authors": [
"Shuai Wang",
"Yuyan Bao",
"Xiao Liu",
"Pei Wang",
"Danfeng Zhang",
"Dinghao Wu"
],
"title": "Identifying cachebased side channels through secretaugmented abstract interpretation",
"venue": "In 28th USENIX Security Symposium,",
"year": 2019
},
{
"authors": [
"Zhenghong Wang",
"Ruby B. Lee"
],
"title": "Covert and side channels due to processor architecture",
"venue": "In ACSAC,",
"year": 2006
},
{
"authors": [
"Zhenghong Wang",
"Ruby B. Lee"
],
"title": "New cache designs for thwarting software cachebased side channel attacks",
"venue": "In ISCA,",
"year": 2007
},
{
"authors": [
"Zhenghong Wang",
"Ruby B Lee"
],
"title": "A novel cache architecture with enhanced performance and security",
"venue": "In MICRO,",
"year": 2008
},
{
"authors": [
"Zhou Wang",
"Alan C Bovik",
"Hamid R Sheikh",
"Eero P Simoncelli"
],
"title": "Image quality assessment: from error visibility to structural similarity",
"venue": "IEEE transactions on image processing,",
"year": 2004
},
{
"authors": [
"Yandong Wen",
"Bhiksha Raj",
"Rita Singh"
],
"title": "Face reconstruction from voice using generative adversarial networks",
"venue": "In Advances in Neural Information Processing Systems,",
"year": 2019
},
{
"authors": [
"Zhenyu Wu",
"Zhang Xu",
"Haining Wang"
],
"title": "Whispers in the hyperspace: Highspeed covert channel attacks in the cloud",
"venue": "In Presented as part of the 21st USENIX Security Symposium (USENIX Security",
"year": 2012
},
{
"authors": [
"Yuanzhong Xu",
"Weidong Cui",
"Marcus Peinado"
],
"title": "Controlledchannel attacks: Deterministic side channels for untrusted operating systems",
"venue": "In 2015 IEEE Symposium on Security and Privacy,",
"year": 2015
},
{
"authors": [
"Yuval Yarom",
"Katrina Falkner"
],
"title": "FLUSH+RELOAD: A high resolution, low noise, L3 cache sidechannel attack",
"venue": "In Proceedings of the 23rd USENIX Conference on Security Symposium,",
"year": 2014
},
{
"authors": [
"Yuval Yarom",
"Daniel Genkin",
"Nadia Heninger"
],
"title": "Cachebleed: a timing attack on openssl constanttime rsa",
"venue": "Journal of Cryptographic Engineering,",
"year": 2017
},
{
"authors": [
"Fisher Yu",
"Ari Seff",
"Yinda Zhang",
"Shuran Song",
"Thomas Funkhouser",
"Jianxiong Xiao"
],
"title": "LSUN: Construction of a largescale image dataset using deep learning with humans in the loop",
"venue": "arXiv preprint arXiv:1506.03365,",
"year": 2015
},
{
"authors": [
"Richard Zhang",
"Phillip Isola",
"Alexei A Efros",
"Eli Shechtman",
"Oliver Wang"
],
"title": "The unreasonable effectiveness of deep features as a perceptual metric",
"venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,",
"year": 2018
}
]
 [
{
"heading": "1 INTRODUCTION",
"text": "Side channel analysis (SCA) recovers program secrets based on the victim program’s nonfunctional characteristics (e.g., its execution time) that depend on the values of program secrets. SCA constitutes a major threat in today’s system and hardware security landscape. System side channels, such as CPU cache accesses and operating system (OS) page table accesses made by the victim software, are widely used to recover program secrets under various realworld scenarios (Gullasch et al., 2011; Aciicmez & Koc, 2006; Wu et al., 2012; Hähnel et al., 2017; Xu et al., 2015; Yarom et al., 2017).\nTo conduct SCA, attackers first conduct an online phase to log a trace of side channel data points made by the victim software (e.g., its accessed CPU cache lines). Then, attackers launch an offline phase to analyze the logged trace and infer secrets (e.g., private inputs). Enabled by advances in system research, the online phase can be performed smoothly (Xu et al., 2015). Nevertheless, the offline phase is challenging, requiring comprehension of victim software’s inputrelevant operations and how such operations influence side channels. The influence is programspecific and obscure (see an example in Fig. 1). Even worse, side channel data points made by realworld software are usually highly noisy. For instance, executing libjpeg (libjpeg, 2020) to decompress one unknown JPEG image produces a trace of over 700K side channel data points, where only a small portion depends on the image content. Identifying such inputdependent data points from over 700K records is extremely difficult.\nLaunching SCA to recover images processed by media software constitutes a common threat in the era of cloud computing (Xu et al., 2015; Hähnel et al., 2017), especially when machine learning as a service (MLaaS) is substantially offered (e.g., for face recognition). 
When envisioning the high risk of violating user privacy, there is a pressing need to understand the adversary’s capability of reconstructing private images with SCA. To date, the offline inference phase of existing SCA attacks requires extensive manual effort and heuristics (Xu et al., 2015; Hähnel et al., 2017). While some preliminary studies explore using AI models to infer secrets (Hospodar et al., 2011; Kim et al., 2019; Cagli et al., 2017; Hettwer et al., 2018), their approaches are primarily driven by classification, i.e., predicting whether a particular bit of a crypto key is 0 or 1. In contrast, reconstructing user private images requires synthesizing and enhancing images from a more holistic perspective.\nRecent advances in generative models, such as generative adversarial networks (GAN) and variational autoencoders (VAE), have enabled a major thrust in image reconstruction, given subtle signals in even cross-modal settings, e.g., voice-to-face or text-to-image (Radford et al., 2016; Reed et al., 2016; Wen et al., 2019; Hong et al., 2018b). Inspired by this breakthrough, we propose an SCA framework using generative models. Given a trace of side channel data points made by image analysis software (e.g., libjpeg) when processing a user input, we reconstruct an image visually similar to the input. Each logged side channel trace, containing around a million records, is first encoded into a matrix and preprocessed by a convolutional neural network (CNN) for feature extraction. Then, a VAE network with a learned prior (referred to as VAE-LP) is employed to reconstruct an image with a holistic visual appearance. We further supplement VAE-LP with a GAN model to enhance the recovered image with vivid details. The GAN generator yields the final output.\nOur attack exploits media libraries, libjpeg (libjpeg, 2020) and uPNG (Middleditch, 2010), using two popular side channels, CPU cache line accesses and OS page table accesses. 
Our attack is independent of the underlying computing infrastructure (i.e., OS, hardware, image library implementation). We require enough side channel logs for training, which is consistently assumed by previous works (Heuser & Zohner, 2012; Maghrebi et al., 2016). While existing attacks particularly target libjpeg and leverage domain knowledge, system hacking, and manual effort to infer pixel values (Xu et al., 2015; Hähnel et al., 2017), we show that images with many details can be reconstructed in an end-to-end manner. We also show the surprising result that, enabled by our framework, side channel traces comprising one-bit data read/write patterns, which prima facie seem minimally informative, suffice to recover images. We conduct qualitative and quantitative evaluations on specific and general datasets representing daily images that could violate privacy if leaked. The recovered images manifest visual appearances consistent with the private inputs. The recovered images also exhibit high discriminability: each recovered image (e.g., a face) can be matched to its reference input among many candidates with high accuracy. In summary, we make the following contributions:\nAt the conceptual level, we present the first generative model-based SCA. Our novel approach learns from historical side channel logs how program inputs influence system side channels, so as to reconstruct user private images automatically. We, for the first time, demonstrate surprisingly effective attacks toward even low-resolution side channels like one-bit data read/write access patterns.\nAt the technical level, we design an effective framework that incorporates various design principles to facilitate image reconstruction from side channels. Our framework pipelines 2D CNN, VAE-LP, and GAN models to systematically enhance the quality of generated images.\nAt the empirical level, our evaluations show that the proposed framework can generate images with vivid details that are closely similar to reference inputs. 
The reconstructed images show high discriminability, making privacy leakage attacks more practical.\nThis is the first paper to conduct SCA with generative models, revealing new SCA opportunities and unknown threats. Our code is at https://github.com/genSCA/genSCA."
},
{
"heading": "2 BACKGROUND",
"text": "To formulate SCA, let the attacked program be P and its input domain be I . For a deterministic and terminating program P , the program execution can be modeled as a mapping P : I → E where E represents program runtime behavior (e.g., memory access). As a common assumption (Hähnel et al., 2017), program inputs are private and profitable for attackers. Since different inputs i, i′ ∈ I can likely induce different e, e′ ∈ E, using inputdependent e ∈ E enables to infer i. Modern computer architectures have primarily zeroed the possibility for adversaries to log e ∈ E. Nevertheless, an attacker’s view on P can be modeled as a function view : E → O that maps E to side channel observations O. Hence, the composition (view ◦ P ) : I → O maps inputs to side channel data points that can be logged by attackers. The view indicates the attacker’s capability, and for typical system security scenarios, the view is formulated as view : Emem → Ocache ∪\nOpage, where Emem denotes a trace of accessed memory locations when executing P with i, and Ocache andOpage represent CPU cache and OS page table side channels, respectively. Despite being unable to monitor Emem, attackers can log accessed cache lines Ocache or page table entries Opage derived from Emem. Attackers then infer Emem and recover i. We now concretize the procedure by introducing how SCA is used to exploit cloud platforms in a twostep approach as follows: Online Phase to Record O. Considering a cloud environment in Fig. 1(a), where two users, one normal and one malicious, deploy two virtual machine (VM) instances on the host. Private images i ∈ I uploaded by users are processed by media library P within the left VM. Modern computer design, e.g., Intel SGX (Intel, 2014), guarantees that i ∈ I and the execution of P cannot be viewed from outside the VM. However, when processing i, P usually imposes a large volume of CPU cache and page table accesses, which, as shown in Fig. 
1(a), can be recorded by the co-located malicious VM or the malicious host OS in a fully automated manner (Han et al., 2017; Chiang et al., 2015; Liu et al., 2015a; Xu et al., 2015; Hähnel et al., 2017). Offline Phase to Infer i. Once side channel traces o ∈ O are collected, an offline phase is conducted to infer (view ◦ P)−1 : O → I and recover i. Fig. 1(b) presents a sample code, where depending on the value of input i, different memory locations (and cache lines) will be visited. Fig. 1(c) shows the corresponding trace of logged cache side channel records. To infer i, attackers eliminate the second record (since it is input-independent), and infer i as 1 according to the first record.\nAttackers aim to 1) pinpoint a subset of records o∗ ⊆ o that depend on i, and 2) recover the mapping from o∗ to i. However, real-world side channel traces (e.g., generated by uPNG) can contain over one million records, of which only a tiny portion o∗ is input-dependent. Even worse, constructing the mapping between i and o∗ requires a deep understanding of program control flows (e.g., how i affects program execution and induces cache accesses in Fig. 1(b)). To date, these tasks require either manual effort (Xu et al., 2015; Hähnel et al., 2017) or formal analysis (Doychev et al., 2013; Wang et al., 2017; 2019), which are program-specific and error-prone with low scalability.\nExisting research tackles the offline phase challenge by proposing profiling-based SCA (Maghrebi et al., 2016; Hettwer et al., 2018; Kim et al., 2019), where models are trained to approximate (view ◦ P)−1 : O → I. However, existing work focuses on predicting particular bits of crypto keys from succinct side channel traces, e.g., a few hundred records (Hettwer et al., 2020). In contrast, this is the first work to show that, by incorporating generative models, SCA can be conducted to exploit real-world media libraries and holistically reconstruct high-quality and discriminable images."
},
{
"heading": "3 THE PROPOSED FRAMEWORK",
"text": "A common assumption shared by SCA (Heuser & Zohner, 2012; Hähnel et al., 2017; Xu et al., 2015) is that the attackers can profile the victim software locally or remotely with training inputs and collect corresponding side channel traces. We train a model to learn how different inputs can influence side channel traces. Then, given a side channel trace logged when processing an unknown image, our framework reconstructs an image that is visually similar to the unknown input.\nOur framework has two pipelined modules (see Fig. 2). Given a side channel trace Ti corresponding to processing an image i, we first encode Ti into a matrix. The encoded matrix will be fed to the VAELP module to generate image îtrace, and we further use GAN to denoise îtrace and yield the final output îGAN . We now elaborate on each module. More details are given in Appendix B."
},
{
"heading": "3.1 SIDE CHANNEL TRACE ENCODING",
"text": "Realworld software is highly complex, and processing one image could generate a huge amount of records, where only a few records are secretdependent. Processing overlong traces for previous attacks is very difficult and requires considerable domainspecific knowledge, expertise, and even manual efforts to locate and remove irrelevant records (Xu et al., 2015; Hähnel et al., 2017).\nDespite the general difficulty of processing overlong traces (each trace contains about 700K to 1.3M data points in our evaluation), we note that adjacent records on a side channel trace are often derived from the same or related modules (e.g., functions) of the victim software. Hence, we “fold” each side channel trace into a N × N × K matrix to approximate spatial locality which can be further exploited by CNNs. A trace is first divided into K segments, where N adjacent points in a segment are put into one row, in total N rows. We do zero padding. CNNs are deployed in the trace encoder of VAELP to process the encoded matrices. Overall, we have no assumption of the access pattern or the convolutional structure of the inputs. Side channel traces are generally sparse, where only a small portion is privaterelated (see Appendix J for experimental information). To smoothly process the side channel traces with generative models, we thus employ CNN models to preprocess side channel traces."
},
{
"heading": "3.2 THE VAELP MODULE",
"text": "VAELP extends standard VAE by replacing its fixed Gaussian prior with a learned prior (Denton & Fergus, 2018), which represents the latent distribution of side channel traces. VAELP is trained using both reallife images and their corresponding side channel traces. By incorporating the side channel trace encoder, we can extract latent representations from the logged side channel data points. Simultaneously, by integrating corresponding reference images during training, we provide a guideline to help the image decoder to generate quality images. As shown in Fig. 2(a), the trace encoder Enctrace (marked in blue) employs 2D CNNs and can extract features from side channel traces Ti in the encoded matrices. The output of Enctrace constitutes the learned prior distribution of latent variable, namely p(zTi), rather than a fixed Gaussian distribution that VAE usually is. The decoder Dec takes the mean of p(zTi) as its input and outputs the image generated from side channel traces. The training phase also employs the image encoder Encimage (marked in red), which accepts reference images i and outputs q(zii). We train the VAELP network by performing forward propagation separately for two different data resources. We then use two generated images to compute reconstruction loss and perform one iteration of backward propagation. Let îtrace and Dectrace be the generated image and decoder Dec by conducting forward propagation with only Ti. Similarly, let îimage and Decimage be the generated image and Dec by conducting forward propagation with only i. 
Parameters of Dectrace and Decimage are shared in the training phase, and the loss of the VAE-LP module is defined as follows:\nLossVAE-LP = L1(i, îimage) + L1(i, îtrace) + β DKL(q(z|i) ‖ p(z|Ti))\nwhich combines three terms: i) a reconstruction loss L1(i, îimage) derived from the reference input, ii) a reconstruction loss L1(i, îtrace) derived from the side channel trace, and iii) a KL-divergence that forces q(z|i) to be close to p(z|Ti). During the generation phase, we remove Encimage and Dectrace. Enctrace and Decimage are retained to yield îtrace given a logged Ti. The generative process is given by:\nzTi ∼ p(z|Ti), îtrace ∼ p(i|zTi)"
},
{
"heading": "3.3 THE GAN MODULE",
"text": "VAELP module is seen to recover îtrace with relatively coarsegrained information (see Sec. 5.1). As will be elaborated in Sec. 4, different private inputs can manifest identical side channel patterns. Hence, some details are inevitably missing during input reconstruction. To tackle this inherent limitations, we further deploy a GAN module (see Fig. 2(b)) which takes the output of the VAELP module, îtrace, and generates the final output îGAN . To smoothly refine îtrace, we employ an autoencoder as the generator G of GAN. The loss of the extended GAN model is defined as follows:\nLossGAN = γL1(G(̂itrace), îtrace) +Ei∼p(i)[logD(i)] +Eîtrace∼p(̂itrace)[log(1−D(G(̂itrace)))]\nCompared with standard GAN, we extend the loss function with L1 loss of îtrace and G(̂itrace) with a weight of γ to force G to retain the holistic visual appearance delivered by îtrace. L1 loss is generally acknowledged to perform better on capturing the lowfrequency part of an image (Isola et al., 2017). Indeed, our evaluation shows that L1 loss, as a common setting, sufficiently conducts SCA and recovers user private inputs of high quality."
},
{
"heading": "4 ATTACK SETUP",
"text": "As introduced in Sec. 2, popular system side channels are primarily derived from program memory accesses. Let addr be the address of a memory location accessed by the victim software, Table 1 reports three utilized side channels and how they are derived from memory accesses. Cache line and page table side channels are commonly used for exploitation (Hähnel et al., 2017; Yarom & Falkner, 2014). Furthermore, enabled by our framework, a very lowresolution side channel of data read/write access patterns can, for the first time, be used to reconstruct highquality images. Fig. 1 holistically depicts how attackers can monitor cache and page table side channels. Data read/write patterns can be similarly recorded by monitoring how caches or page tables are visited.\nTable 1 shows that different addr can be mapped to the same side channel record. Similarly, different inputs can induce identical memory address addr. For instance, in Fig. 1(b) array[59] and array[60]will always be executed as long as i 6= 1. Two layers of manytoone mapping amplify the uncertainties of synthesizing discriminable images of high quality. It is easy to see that we are not simply mapping a trace back to an image.\nEach memory address addr has 48 bits, denoting a large range of values. We normalize the memory address value (discrete integers) into continuous values within [0, 1]. Overall, while arbitrary 48bit integers have a large range, side channel data points indeed vary within a small range. For instance, for cache based side channels, the possible values are limited by the total number of CPU cache units. In all, side channel data points are large values ranging in a relatively small range. Attack Target. We attack two media libraries, libjpeg and uPNG, to reconstruct private user images of JPEG and PNG formats. Previous image reconstruction SCA (Xu et al., 2015; Hähnel et al., 2017) only exploit libjpeg. 
PNG and JPEG are two very popular image compression standards, and given an image in JPEG/PNG format, libjpeg and uPNG reverse the compression to generate a bitmap image as the basis of many image analysis tools, e.g., the Python Pillow library (Clark, 2020). The decompression process introduces many input-dependent memory accesses which, from the attacker’s perspective, are reflected in side channels according to Table 1.\nWe share common assumptions with existing profiling-based SCA (Hospodar et al., 2011; Heuser & Zohner, 2012) that side channel traces have been well prepared for use. For our experiments, we use Pin (Luk et al., 2005), a runtime monitoring tool, to intercept memory accesses of the victim software. A logged memory access trace is converted into three separate side channel traces according to Table 1, denoting the attacker’s view on i. Each side channel trace generated by libjpeg or uPNG contains 700K to 1.3M records. See Appendix A for attack setup details. We evaluate traces logged via different side channels separately. Evaluating the composition of side channels (i.e., a “mega side channel”) is not aligned with how real-world SCA is typically launched."
},
{
"heading": "5 EVALUATION",
"text": "We present the first systematic approach to reconstructing images from side channels. There is no previous research for empirical comparison. Two closely related works provide no tools for use (Xu et al., 2015; Hähnel et al., 2017). As disclosed in their papers, manual efforts are extensively used to reconstruct images. For instance, both methods treat image color recovery as a separate task, by iterating multiple reconstruction trials and manually picking one with relatively better visual effect. Xu et al. (2015) exploit page table side channels and colors are rarely recovered. Hähnel et al. (2017) recover adequate image colors but only exploit finergrained cache side channels. Also, domainspecific knowledge on libjpeg is required to locate a tiny portion of secretdependent side channel data points for use. In contrast, we present an endtoend approach to recovering colorful images with high quality, by directly analyzing a page table or cache side channel trace of up to 1.3M records. Our attack treats victim software (libjpeg or uPNG) as a “blackbox” (no need for source code) and is independent of any underlying computing infrastructure details.\nBenchmarks. Three datasets are primarily used in the evaluation, containing typical daily images that could violate privacy if leaked to adversaries. Consistent with existing research reconstructing images from audio recording (Wen et al., 2019; Oh et al., 2019), we accelerate the model training using images of 3 × 128 × 128 pixels. Wen et al. (2019) use images of an even smaller size (3 × 64× 64). See Appendix B for model implementation and training details. (i) Largescale CelebFaces Attributes (CelebA) (Liu et al., 2015b) contains about 200K celebrity face images. We randomly select 80K images for training and 20K images for testing. (ii) KTH Human Actions (KTH) (Laptev & Lindeberg, 2004) contains videos of six actions made by 25 persons in 4 directions. 
For each action, we randomly select videos of 20 persons for training and use the rest for testing. We have 40K images for training and 10K images for testing. (iii) LSUN Bedroom Scene (LSUN) (Yu et al., 2015) contains images of typical bedroom scenes. We randomly select 80K images for training and 20K images for testing."
},
{
"heading": "5.1 QUALITATIVE EVALUATION RESULTS",
"text": "Fig. 3 shows the reconstructed images in different settings. In addition to reporting the final outputs (i.e., the “VAELP & GAN” column), we also report images generated only from the VAELP module in the “VAELP” column for comparison. More results are given in Appendix C. For most of the cases, the reconstructed images and reference images show consistent visual appearances,\nsuch as gender, skin color, bedroom window, and human gesture. Images in the LSUN dataset contain many subtle bedroom details, imposing relatively higher challenge for reconstruction.\nRealistic and recognizable images can be recovered using cache side channels (especially for LSUN) while images recovered from the other side channels are relatively blurry. As explained in Table 1, cache line indices (addr 6) are closer to the memory address addr (only missing the lowest 6 bits), while page table indices eliminate the lowest 12 bits from addr (typically lower bits in addr are informative and likely influenced by inputs), and each read/write pattern has only one bit.\nCompared with images generated by VAELP, the GAN module enhances the image quality by adding details and sharpening the blurred regions. GAN may overly enhance the image quality (e.g., the first LSUN case with jungle green wallpaper). However, GAN is indeed vital to exploit user privacy. For example, considering the first human gesture case in KTH, where the image reconstructed from cache side channels contains a “black bar” when using VAELP. The GAN module enhances this obscure image and reconstructs the human gesture, thus violating user privacy."
},
{
"heading": "5.2 QUANTITATIVE EVALUATION RESULTS",
"text": "To assess the generated images w.r.t. discriminability, we first study the accuracy of matching a reconstructed image î to its reference input i. To do so, we form a set of N images which include the reference input i andN−1 images randomly selected from our testing dataset. We then measure whether i appears in the topk most similar images of î. Conceptually, we mimic a deanonymization attack of user identity, where N scopes the search space attackers are facing (e.g., all subscribers of a cloud service). We use a perceptuallybased similarity metric, SSIM (Wang et al., 2004), to quantify structurallevel similarity between the reconstructed images and reference inputs.\nTable 2 reports the evaluation results of six practical settings. Consistent with Fig. 3, cache side channels help to reconstruct î with better discriminability (highest accuracy for all settings in Table 2). LSUN images have lower accuracy. As shown in Fig. 3, images in LSUN contain many subtle bedroom details and deem challenging for discrimination. We achieve the highest accuracy when k = 20 and N = 100, while the accuracy, as expected, decreases in more challenging settings (e.g., when k = 1 and N = 500). Evaluations consistently outperform the baseline — random guess. For instance, while the accuracy of random guess when k = 1 and N = 500 is 0.2%, we achieve higher accuracy (range from 0.46% to 28.68%) across all settings. Appendix D also conducts this evaluation using images generated from only VAELP. We report that better discriminability can be achieved for all datasets when supplementing VAELP with GAN.\nFor face images in CelebA, we also study how well different facial attributes are being captured in the reconstructed images. We use Face++ (fac, 2020), a commercial image analysis service, to classify reconstructed images and reference inputs w.r.t. age and gender attributes. Fig. 
4 reports the confusion matrices for the age and gender attributes, along with the distributions of the training data for reference. The reconstructed images are produced using cache side channels. Overall, we achieve good agreement for both male and female labels. We also observe correlated classification results for most age groups. The age distribution indicates that early adulthood (20–40) and middle age (40–60) dominate the dataset, which presumably induces biases in the age confusion matrix. Similarly, “male” has a smaller representation in the training set, potentially explaining its lower agreement in the confusion matrix. Appendix D also conducts this evaluation using only VAE-LP or using other side channels, where comparable results are achieved."
},
{
"heading": "5.3 GENERALIZABILITY",
"text": "This section explores the generalizability of our SCA. We launch attacks toward uPNG to illustrate that our method is independent of specific software implementation or image formats. For uPNG experiments, we evaluate attacks on the CelebA dataset using cache side channels. As shown in table Table 3, our attack can recover discriminable images and largely outperforms the baseline (random guess) in terms of privacy inference. See Appendix E for the reconstructed images. We also benchmark our attack to synthesize arbitrary images without using specific types of images to train the model. We instead use a general training dataset, miniImagenet. We use cache side channels to exploit libjpeg. Table 3 illustrates that considerable privacy is leaked using miniImagenet as the training set, and for all settings, our SCA largely outperforms the baseline.\nThe recovered images when using miniImagenet as the training data are visually worse than images recovered using specialized datasets. See Appendix F for the recovered images. This observation reveals the potential tradeoff of our research. Overall, training generative models using a general dataset without the knowledge of images classes seems “unconventional.” A generative model is typically trained using dataset of only one class (Guo et al., 2019), or it requires image class information to be explicitly provided during both training and generation phases (Brock et al., 2019). Nevertheless, we still evaluate our approach using a general dataset to explore the full potential of our attack. We attribute the adequate results using general datasets to discriminable features extracted by our trace encoder from images of different classes. See Appendix G for further evaluations.\nFrom a holistic perspective, the adopted training image sets constitute a predictive model of user privacy. 
While a particular user input is private, we assume that the functionality of victim software (e.g., a human face analysis service) is usually known to the public or can be probed prior to attacks."
},
{
"heading": "6 DISCUSSION",
"text": "This is the first paper that provides a practical solution to reconstruct images from system side channels. Proposing a novel generative model design is not our key focus. Also, despite the encouraging results, the reconstructed images show room for improvement. For instance, not all image colors were well recovered. Our manual inspection shows that compared with libjpeg, uPNG does not impose informative side channel dependency on pixel colors (i.e., different colors can likely induce identical side channel logs). Nevertheless, user privacy is presumably leaked as long as the image skeleton is recovered. Colors (or other details) can be recovered if the system community discovered more powerful (finergrained) side channels."
},
{
"heading": "7 RELATED WORK",
"text": "Exploiting System Side Channels. System side channels have been used to exploit various reallife software systems (Dong et al., 2018; Wu et al., 2012; Hähnel et al., 2017; Xu et al., 2015). The CPU cache is shown to be a rich source for SCA attacks on cloud computing environments and web browsers (Hähnel et al., 2017; Oren et al., 2015). In addition to cache line side channels analyzed in this research, other cache storage units, including cache bank and cache set, are also leveraged for SCA attacks (Yarom et al., 2017; Liu et al., 2015a). Overall, while most attacks in this field perform dedicated SCA attacks toward specific side channels, our approach is general and orthogonal to particular side channels.\nProfilingbased SCA. Machine learning techniques have substantially boosted profilingbased SCA by learning from historical data. DNN models have been used to recover secret keys from crypto libraries under different scenarios (Heuser & Zohner, 2012; Maghrebi et al., 2016; Cagli et al., 2017; Hettwer et al., 2018; Kim et al., 2019; Hettwer et al., 2020). Nevertheless, the success of existing AIbased SCA attacks is primarily driven by the model classification capability, e.g., deciding whether a particular bit of AES secret key is 0 or 1. This paper advocates the new focus on reconstructing images with generative models, leveraging another major breakthrough in DNN.\nSCA Mitigation. Existing SCA mitigation techniques can be categorized into systembased and softwarebased approaches. For systembased approaches, previous works have proposed to randomize the cache storage units or enforce finegrained isolation schemes (Wang & Lee, 2006; 2008; 2007; Liu et al., 2016). Some recent advances propose to leverage new hardware features to mitigate side channel attacks (Gruss et al., 2017). 
In addition, software-level approaches, including designing secret-independent side channel accesses and randomizing memory access patterns (Coppens et al., 2009; Raj et al., 2009; Schwarz et al., 2018), have also been proposed. Compared with system- and hardware-based mitigations, software-based approaches usually do not require a customized hardware design and are generally more flexible. Nevertheless, software-based approaches usually incur an extra performance penalty."
},
{
"heading": "8 CONCLUSION",
"text": "This paper has presented a general and effective SCA framework. The framework is trained with side channels to exploit media software like libjpeg and uPNG. Our evaluation shows that reconstructed images manifest close similarity with user inputs, making privacy leakage attacks practical. We also show surprising findings that enabled by our framework, attacks with lowresolution side channels become feasible."
},
{
"heading": "9 ETHICS STATEMENT",
"text": "We present a systematic and effective pipeline of recovering private images using system side channels. It is generally acknowledged that studying attack schemes helps eliminate false trust on modern computing infrastructures and promote building secure systems (Athalye et al., 2018). While there is a risk that SCA could become easier using our methods, we believe that our work will also promote rapidly detecting SCA before security breaches. As will be shown in Appendix J, our proposed technique can serve as a “bug detector” to isolate certain code blocks in image processing software that induce SCA opportunities. Developers can thus refer to our findings to patch their software.\nOur efforts could impact the evergrowing CV community in building side channelfree image analysis tools. Despite the algorithmlevel efforts to address privacy concerns, e.g., via differential privacy (Dwork et al., 2006), the infrastructurelevel vulnerabilities have not yet received enough attention, especially in the realworld scenarios like MLaaS. Our research will serve as a critical incentive to rethink tradeoffs (e.g., cost vs. security guarantee) currently taken in this field."
},
{
"heading": "10 ACKNOWLEDGEMENTS",
"text": "We thank the ICLR anonymous reviewers and area chairs for their valuable feedback. Junping Zhang is supported by National Key Research and Development Project (2018YFB1305104)."
},
{
"heading": "A ATTACK SETUP DETAILS",
"text": "In this section, we provide detailed information regarding our attack setup, including three employed side channels, the attacked libjpeg and uPNG libraries (libjpeg, 2020; Middleditch, 2010), and how we log side channel information. Three side channels are taken into account as follows:\n• Cache Line. Cache line side channel denotes one popular hardware side channel, enabling the exploitation of realworld crypto, image and text libraries (Hähnel et al., 2017; Yarom & Falkner, 2014). The CPU cache, as one key component in modern computer architecture, stores data so that future memory requests of that particular data become much faster. Data are stored in a cache block of fixed size, called the cache line. Each memory access made by victim software is projected to a cache line access. In typical cloud platforms, an attacker can monitor cache line accesses made by victim software, leading to a powerful side channel. For modern Intel architectures, the cache line index of a memory address addr can be computed as addr 6. Therefore, access of a particular cache line can be mapped back to 26 memory addresses.\n• Page Table. The OS kernel uses the page table to track mappings between virtual and physical memory addresses. Every memory access made by the victim software is converted into its physical address by querying a page table entry. In cloud computing platforms, a malicious OS on the host can observe page table accesses made by the victim software to infer its memory access (Xu et al., 2015). Given a virtual address addr, we calculate the accessed page table index by masking addr with PAGE MASKM : addr & (∼ M). M is 4095 on modern x86 architectures (pag, 2020).\n• Data Read/Write Access. Our preliminary study shows surprising results that enabled by powerful generative models, lowresolution side channels of only onebit data read/write access records can be used to recover quality images. 
That is, given a memory access made by the victim software, we use one bit to note whether the access is a read or a write operation. Such data read/write accesses can be easily observed by monitoring either cache or page table accesses.\nSCA attacks using cache lines and page tables are well-known and have enabled real-life exploitations in various scenarios. In contrast, to our knowledge, no real-world attacks have been designed to exploit read/write patterns. This work shows that quality images can be synthesized from such low-resolution read/write access side channels.\nPreparing Victim Software Consistent with existing SCA exploiting media software (Xu et al., 2015; Hähnel et al., 2017), we use a widely-used JPEG image processing library, libjpeg, as the attack target. Attacking libjpeg, which has been exploited in the literature, makes it easier to (conceptually) compare our approach with existing works. As mentioned in our paper, we contacted the authors of both papers to inquire about their tools; we did not receive any response by the time of writing. As disclosed in their papers, manual effort is primarily used to recover images. On the other hand, our approach can analyze other image processing libraries as long as different inputs adequately influence side channel logs. To demonstrate the generalizability of our approach, we also attacked another image library, uPNG, which takes images of PNG format as inputs.\nJPEG and PNG are two popular image compression standards. Given an image of JPEG/PNG format, both image processing libraries reverse the compression step to generate a bitmap image as the prerequisite of many image analysis applications. The decompression process introduces a considerable amount of input-dependent side channel accesses for both libraries. 
We compile both libjpeg and uPNG on a 64-bit Ubuntu 18.04 machine with gcc at optimization level -O0, which disables all optimizations.\nWe measure the complexity of libjpeg by counting the lines of code of the attacked libjpeg module. The attacked module, which conducts JPEG image decompression under various settings, has approximately 46K lines of code. Similarly, the uPNG software has about 1.2K lines of code. In contrast, the crypto software typically attacked by previous profiling-based SCA (Hettwer et al., 2020; Gullasch et al., 2011; Aciicmez & Koc, 2006; Yarom et al., 2017) is much simpler. For instance, the x86 implementation of the Advanced Encryption Standard (AES) in OpenSSL has about 600 lines of code (excluding global data structures like permutation boxes).\nPreparing Side Channel Logs To prepare side channel traces, we use Pin (Luk et al., 2005), a runtime monitoring framework developed by Intel, to intercept all memory accesses of our test programs when processing an input image i. Every virtual address addr on the logged trace is translated into its corresponding cache line and page table indexes following the aforementioned methods. Similarly, we intercept all memory accesses and, for each memory access, use one bit to denote whether it is a data read or write access. All these runtime monitoring tasks can be done by writing two Pin plugins. We report that processing each image with libjpeg generates a trace of 730K to 760K side channel records. Processing an image with uPNG generates a trace of about 1.3M side channel records. Recall that, as introduced in Sec. 3.1, each side channel trace is encoded into an N × N × K matrix and then processed by CNNs. A libjpeg trace is encoded into a 512 × 512 × 3 matrix. A uPNG trace is encoded into a 512 × 512 × 6 matrix. We use zero padding for the matrices. 
In comparison, exploiting crypto libraries (e.g., AES decryption) generates much more succinct side channel traces with only a few hundred records (Hettwer et al., 2020).\nAttacking Other Image Processing Software and DNN Models We pick libjpeg since it is the only media software attacked by existing SCA (Xu et al., 2015; Hähnel et al., 2017). We also attacked uPNG to demonstrate the generalizability of our approach. Note that libjpeg is commonly used in the image analysis pipeline; e.g., it is a prerequisite of the popular Python image processing library Pillow (Clark, 2020).\nAlso, careful readers may wonder about the feasibility of directly exploiting DNN-based image analysis tools. However, as pointed out in previous research (Hong et al., 2018a), the memory access of typical DNN operations like matrix multiplications is not input-dependent. That is, while it has been demonstrated by the same authors that cache side channels are feasible for recovering DNN model architectures (Hong et al., 2020), SCA is generally not feasible for recovering inputs to DNN models."
},
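The two index computations described above can be sketched in a few lines of Python; the constants follow the x86 values stated in the text (64-byte cache lines, i.e., 6 offset bits, and a page mask M of 4095), while the function names are ours.

```python
# Sketch of the three side channel encodings from Appendix A.
# Constants follow the x86 values in the text: 64-byte cache lines
# (6 offset bits) and a page mask M of 4095 (0xFFF).

CACHE_LINE_BITS = 6      # 2**6 = 64 addresses share one cache line
PAGE_MASK = 4095         # low 12 bits index inside a 4 KiB page

def cache_line_index(addr: int) -> int:
    """Cache line index observed by the cache side channel: addr >> 6."""
    return addr >> CACHE_LINE_BITS

def page_index(addr: int) -> int:
    """Page base observed by the page table side channel: addr & ~M."""
    return addr & ~PAGE_MASK

def rw_bit(is_write: bool) -> int:
    """One-bit data read/write record: 0 = read, 1 = write."""
    return int(is_write)

# Any two addresses within the same 64-byte line yield the same record:
assert cache_line_index(0x1000) == cache_line_index(0x103F)
```

Because 64 addresses collapse into one cache line record, a logged record can only be mapped back to a 2^6-address region, which is the resolution limit the text refers to.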
{
"heading": "B MODEL ARCHITECTURE AND EXPERIMENT SETUP",
"text": "We now report the architecture and parameters of our framework. The Enctrace of VAELP module is reported in Table 5. Encimage and Decimage are reported in Table 4. The generator G and discriminator D of our GAN module are listed in Table 6 and Table 7, respectively.\nWe implement our framework in Pytorch (ver. 1.5.0). We use the Adam optimizer (Kingma & Ba, 2014) with learning rate ηV AE−LP = 0.0001 for the VAELP module, and learning rate ηGAN = 0.0002 for the GAN module. We set β1 = 0.5, and β2 = 0.999 for both modules. β in LossV AE−LP is 0.0001, and γ in LossGAN is 100. Minibatch size is 50.\nWe ran our experiments on an Intel Xeon CPU E52678 with 256 GB of RAM and one Nvidia GeForce RTX 2080 GPU. The training is completed at 200 iterations (100 iterations for the VAELP module, and 100 iterations for the GAN module).\nC SAMPLE OUTPUTS WHEN ATTACKING LIBJPEG\nThis section provides more images generated by our framework when attacking libjpeg. Overall, we interpret the results as promising and highly consistent across all three different datasets. As discussed in Sec. 5.1, the reconstructed and the corresponding reference images show highly consistent identities, such as gender, face orientation, human gesture, and hair style.\nVAELP Reference InputVAELP & GAN VAELP VAELPVAELP & GAN VAELP & GAN\nVAELP Reference InputVAELP & GAN VAELP VAELPVAELP & GAN VAELP & GAN\nVAELP Reference InputVAELP & GAN VAELP VAELPVAELP & GAN VAELP & GAN\nVAELP Reference InputVAELP & GAN VAELP VAELPVAELP & GAN VAELP & GAN\nVAELP Reference InputVAELP & GAN VAELP VAELPVAELP & GAN VAELP & GAN\nVAELP Reference InputVAELP & GAN VAELP VAELPVAELP & GAN VAELP & GAN"
},
{
"heading": "D QUANTITATIVE EVALUATION",
"text": "Besides the quantitative evaluation of the discriminability of reconstructed images reported in the paper, we also analyze images reconstructed by using only the VAELP module and presented the results in Table 8. Accordingly, we give the quantitative data that has been already reported in our paper for cross comparison in Table 9.\nComparing results reported in Table 8 and Table 9, we observed that by using VAELP & GAN modules together, the KTH human gesture dataset has a substantial improvement in terms of accuracy. The average accuracy of the KTH dataset is 41.9% in Table 8 while the average accuracy of the KTH dataset is increased to 49.8% in Table 9. This observation is consistent with our findings in Sec. 5.2 and some cases demonstrated in Fig. 7 and Fig. 8. Recall we observed an obscure “black bar” in KTH images reconstructed by only using VAELP module, while a human gesture can be clearly identified by enhancing the “black bar” with the GAN module. We also observed improved accuracy for the CelebA (41.0% to 41.4%) and LSUN datasets (12.6% to 12.9%) when supplementing VAELP with GAN. Overall, we interpret that better discriminability can be achieved when using GAN, implying higher success rate for attackers to deanonymize user identity and privacy.\nWe also present finegrained facial attribute comparison between the reconstructed images and reference inputs. In Sec. 5.2 we have reported gender and age confusion matrices evaluation using cache side channels (also presented in Fig. 11 for the reference and cross comparison), we also report other settings in Fig. 12, Fig. 13, and Fig. 14. To quantitatively evaluate the confusion matrices, we measure and report the weightedaverage F1 score in Table 10. 
Besides one case with a notably increased F1 score (the gender matrix using the page table), VAE-LP and VAE-LP & GAN have comparable weighted-average F1 scores.\nE SAMPLE OUTPUTS WHEN ATTACKING UPNG\nThis section provides images generated by our framework when attacking uPNG. While the side channel traces induced by uPNG are generally less informative than those of libjpeg (as shown in Table 3 and discussed in Sec. 6), we still observe highly consistent visual identities between the reconstructed images and their reference inputs, including gender, face orientation, hair style, hair length, the presence of glasses, and many other factors."
},
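The weighted-average F1 score used for Table 10 can be computed directly from a confusion matrix; the sketch below shows the standard per-class F1 weighted by class support. The 2×2 matrix at the end is a made-up illustration, not data from the paper.

```python
# Weighted-average F1 from a confusion matrix (rows = true class,
# columns = predicted class), as used for the Table 10 evaluation.

def weighted_f1(conf):
    n_classes = len(conf)
    total = sum(sum(row) for row in conf)
    score = 0.0
    for c in range(n_classes):
        tp = conf[c][c]
        support = sum(conf[c])                            # true instances of c
        predicted = sum(conf[r][c] for r in range(n_classes))
        precision = tp / predicted if predicted else 0.0
        recall = tp / support if support else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        score += f1 * support / total                     # weight by support
    return score

# Hypothetical gender confusion matrix (not a reported result):
matrix = [[40, 10],
          [20, 30]]
print(round(weighted_f1(matrix), 3))   # prints 0.697
```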
{
"heading": "F SAMPLE OUTPUTS WHEN TRAINING WITH MINIIMAGENET",
"text": "To measure our attack reconstructing arbitrary images without knowing the type of images being processed, Sec. 5.3 reports model training and attack performance using a general dataset, miniImagenet. We report that the miniImagenet dataset has in total 60K images of 100 classes, and we divide each class (with 600 images) into 480 training images and 120 testing images. As a result, we have in total 48K images for training and take the other 12K images for testing. While training generative models with a general dataset is not the common practice unless image class information is explicitly provided (Brock et al., 2019), Table 3 still reports highly encouraging results of the discriminability analysis by largely outperforming the baseline — random guess. In this section, we provide sample images generated at this step.\nThe synthesized images from the miniImagenet dataset is visually much worse that images synthesized from specific datasets (e.g., CelebA in Fig. 5). Nevertheless, by comparing the synthesized images and their corresponding references (i.e., user inputs), we interpret that images still exhibit high discriminability, in the sense that many visually consistent image skeletons and colors are recovered, indicating adequate leakage of user privacy."
},
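The per-class split described above (600 images per class, 480 for training and 120 for testing, across 100 classes) can be sketched as follows; the file names are synthetic placeholders and the random shuffle is our assumption about how the split is drawn.

```python
# Per-class train/test split from Appendix F: 100 classes x 600 images,
# 480 training / 120 testing per class -> 48K train, 12K test overall.
import random

def split_class(images, n_train=480, seed=0):
    images = list(images)
    random.Random(seed).shuffle(images)   # assumption: a random split
    return images[:n_train], images[n_train:]

dataset = {c: [f"class{c}_img{i}.jpg" for i in range(600)] for c in range(100)}
train, test = [], []
for c, images in dataset.items():
    tr, te = split_class(images)
    train += tr
    test += te

assert len(train) == 48_000 and len(test) == 12_000
```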
{
"heading": "G CLASSIFYING OUTPUTS OF THE TRACE ENCODER",
"text": "By training our framework with miniImagenet and assessing the discriminability, Sec. 5.3 has demonstrated that our attack can largely outperform the baseline even with no prior knowledge on the class information of user private images. We attribute the promising evaluation results to the discriminable features successfully extracted by our trace encoder (trained with miniImagenet; see Appendix F). This section presents empirical results by assessing to what extent the latent representations derived from images of two randomly selected classes are distinguishable.\nTo this end, we build a binary classifier, by piggybacking our trace encoder with a fullyconnected (FC) layer and using Sigmoid as the activation function. As mentioned in Appendix F, the miniImagenet dataset has in total 60K images of 100 classes, and we divide each class into 480 images for training and 120 images for testing. At this step, we randomly select two classes of images, and use their training sets (in total 960 images) to train the proposed binary classifier. We then use their testing sets, including in total 240 images, to measure the binary classification accuracy. It is worth mentioning that we only train the classifier for one epoch to highlight that the latent representations extracted by the trace encoder already exhibit high quality and discriminability. We only tune the parameters of FC layer but preserve the parameters of our trace encoder.\nWe iterate this process for 100 times. Fig. 21 reports the classification accuracy across all 100 binary classification tasks. While the baseline accuracy for our binary classification task is 50%, most tasks exhibit much higher classification accuracy. We report the average accuracy is 81.6% and 32 cases exhibit a classification accuracy above 90%."
},
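The probe described above (a frozen trace encoder followed by one fully-connected layer with a sigmoid, trained for a single epoch) can be sketched with the standard library. The `encoder` here is a tiny stand-in for the frozen trace encoder, and the toy data are made up; only the structure of the probe follows the text.

```python
# Sketch of the Appendix G probe: freeze the encoder, attach one
# fully-connected layer + sigmoid, train for exactly one epoch.
import math

def encoder(x):
    """Stand-in for the frozen trace encoder (parameters never updated)."""
    return [x[0] + x[1], x[0] - x[1]]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_probe(data, lr=0.5):
    """One epoch of SGD on binary cross-entropy; only the FC layer learns."""
    w, b = [0.0, 0.0], 0.0
    for x, y in data:                     # single pass = one epoch
        f = encoder(x)                    # features from the frozen encoder
        p = sigmoid(sum(wi * fi for wi, fi in zip(w, f)) + b)
        g = p - y                         # gradient of BCE w.r.t. the logit
        w = [wi - lr * g * fi for wi, fi in zip(w, f)]
        b -= lr * g
    return w, b

def accuracy(data, w, b):
    hits = 0
    for x, y in data:
        f = encoder(x)
        p = sigmoid(sum(wi * fi for wi, fi in zip(w, f)) + b)
        hits += int((p > 0.5) == bool(y))
    return hits / len(data)

# Toy two-class task: even one epoch separates it when the (frozen)
# features are already discriminable, mirroring the point of Appendix G.
data = [((1.0, 0.0), 1), ((-1.0, 0.0), 0)] * 10
w, b = train_probe(data)
```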
{
"heading": "H ROBUSTNESS TO NOISE",
"text": "This section provides experiences on the robustness of our framework by inserting noise into the side channel traces. To this end, we evaluated the following three settings:\n• Gaussian noise insertion: For each side channel data point input on the side channel trace, input = x×n+(1−x)× input, where x ∈ [0.1, 0.2, 0.5], and n denotes randomly generated noise following the Gaussian Distribution.\n• Zero replacement: Randomly set x% of the data points on the side channel trace to 0, where x ∈ [10, 20, 50].\n• Round shifting: Round shifting the side channel trace for x steps, where x ∈ [1, 10, 100].\nTable 11: Discriminability evaluation by adding Gaussian noise in the side channel trace.\nConfiguration x = 0 x = 0.1 x = 0.2 x = 0.5 k = 1 N = 100 49.98% 48.32% 39.14% 5.66% k = 5 N = 100 78.00% 76.28% 67.08% 19.62% k = 20 N = 100 91.98% 90.56% 86.22% 45.20%\nTable 12: Discriminability evaluation by randomly replacing x% side channel data points with zero.\nWe present the corresponding qualitative evaluation results in Fig. 22, Fig. 23, and Fig. 24, respectively. Accordingly, we present the quantitative results in Table 11, Table 12, and Table 13.\nOverall, despite the challenging settings, we still observed considerable visually consistent features (e.g., face orientation, hair style, gender) between the reconstructed images and their reference inputs. Fig. 24 shows that round shifting seems to impose relatively low impact on the reconstructed images (e.g., comparing x = 0 with x = 100). In contrast, a more challenging setting, adding Gaussian noise to each side channel data point, causes observable effect on the constructed images (e.g., comparing x = 0 with x = 0.5)."
},
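The three noise settings above can be sketched with the standard library; the trace is modeled as a flat list of numeric records, and the function names are ours.

```python
# Sketch of the three Appendix H noise settings on a side channel trace.
import random

def gaussian_mix(trace, x, seed=0):
    # v' = x * n + (1 - x) * v, with n ~ N(0, 1) and x in {0.1, 0.2, 0.5}
    rng = random.Random(seed)
    return [x * rng.gauss(0.0, 1.0) + (1.0 - x) * v for v in trace]

def zero_replace(trace, pct, seed=0):
    # Randomly set pct% of the data points to 0, pct in {10, 20, 50}
    rng = random.Random(seed)
    out = list(trace)
    k = int(len(out) * pct / 100)
    for i in rng.sample(range(len(out)), k):
        out[i] = 0
    return out

def round_shift(trace, steps):
    # Circularly shift the trace by `steps` positions, steps in {1, 10, 100}
    s = steps % len(trace)
    return trace[-s:] + trace[:-s] if s else list(trace)
```

Note that round shifting preserves every record and only changes its position, which is consistent with the observation that it affects the reconstruction least.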
{
"heading": "I ABLATION EXPERIMENTS",
"text": "This section provides more ablation experiments. We aim to demonstrate the necessity of image encoder and a learned prior. To this end, we launch experiences to synthesize images without using the image encoder (see the 4th row of Fig. 25), and also synthesize images with a fixed Gaussian prior (the 3rd row of Fig. 25). It is easy to see that the reconstructed images manifest much lower quality compared with images synthesized by our framework (the 2nd row of Fig. 25). In particular, images synthesized without using the image encoder are seen to contain grids (the last row of Fig. 25). It is also observed that when feeding the decoder with a fixed gaussian prior, the synthesized images are low quality as well. The outputs become not recognizable since the fixed prior has primarily no information of side channel traces. This also indicates that our model is not a simple image generator, and the trace encoder plays a vital role in the pipeline. Overall, we interpret these ablation evaluations highlight the importance of the image encoder and a learned prior in SCA. Our designed framework, by incorporating the side channel trace encoder, can effectively extract latent representations from the logged side channel data points. Simultaneously, by integrating corresponding reference images during training, we provide a guideline to help the image decoder to generate quality images.\nIn addition, we further conduct another ablation experiments regarding image level metrics. To do so, we use LPIPS (Zhang et al., 2018), imagelevel perceptual loss, to calculate the similarity of the reconstructed image and ground truth image.\nThe results are given in Table 14. Compared with our results reported in Sec. 5, the accuracy of GAN output is reduced by around 10% and reduced even more on output of VAELP. Nevertheless, the results are reasonable since the model is not train using this perceptual loss. 
Overall, we interpret the evaluation results as showing that, from the perspective of “human adversaries”, the GAN module is indeed necessary."
},
{
"heading": "J MAPPING SIDE CHANNEL TRACES BACK TO INFORMATIVE FUNCTIONS",
"text": "In this section, we explore side channel traces and aim to answer the question “what information in the side channel is used for image reconstruction?” To this end, we explore which data points on the side channel trace affects most to the output by calculating the gradient. Due to the limited time, we tested cache side channel traces logged for the libjpeg software.\nGiven an image i which is not in training data, we first collect its corresponding cache side channel trace Ti when being processed by libjpeg. VAELP module will then take Ti as the input and reconstruct itrace. As shown in Fig. 26, we then calculate the loss of itrace and i, and further perform backward propagation to calculate the gradient up to Ti, namely gTi . Since gTi has the same size of Ti, we can pinpoint which part of Ti contributes most to itrace by localizing which part of gTi has a higher value. More specifically, we normalize gTi to [0, 1] and only keep values greater than a threshold T (T is set as 0.8 in our current study). Overall, we report that from the employed cache side channel trace with 754139 data points, we successfully pinpoint a set of 31 data points that primarily contribute to the private image reconstruction (see Fig. 28 and Fig. 29).\nSince each side channel data point is converted from a memory address (see Table 1), our retained “informative” side channel data points can thus be mapped back to certain functions in libjpeg. That is, we indeed use informative side channel records to isolate functions in libjpeg that potentially leak privacy. Fig. 27 depicts how this backward mapping and isolation are conducted. For instance, given a side channel record 0x55dba1628e62 marked as informative and notably contributes to the gradient, we use the address of the corresponding memory access instruction, 0x7f38daafd6b5, to isolate function jsimd convsamp float. 
That is, certain input-dependent memory accesses in jsimd_convsamp_float induce this cache line access and eventually contribute to the reconstruction of the user private input.\nFig. 28 reports part of the logged side channel trace and marks in red several side channel data points which largely affect the gradient. We show their corresponding functions in libjpeg in Fig. 29. Overall, we report that this evaluation successfully pinpoints multiple critical functions in the libjpeg software that have input-dependent cache accesses. In particular, we note that this evaluation helps to “rediscover” some functions that have been reported by previous research (mostly with manual effort) as vulnerable to SCA: e.g., the functions write_ppm, put_rgb and output_ppm, which dump the decompressed raw image to the disk.\nMore importantly, this evaluation helps to pinpoint new functions that contribute to the private image reconstruction (and hence become vulnerabilities to SCA), such as jsimd_can_fdct_islow, jsimd_can_fdct_ifast, jsimd_convsamp and jsimd_convsamp_float. These functions primarily conduct image discrete cosine transformation (DCT) and decompression. We interpret this as a highly encouraging finding, in particular:\n• As reviewed in Sec. 2, previous research uses manual effort (Xu et al., 2015; Hähnel et al., 2017) or formal methods (Doychev et al., 2013; Wang et al., 2017) to pinpoint program components that depend on inputs, which is program-specific and error-prone with low scalability.\n• Our study in this section reveals a procedure that leverages the gradient to directly highlight which parts of the logged side channel trace contribute to the synthesis of outputs. 
We then map the highlighted trace back to where it is derived from in the victim software to isolate vulnerable components (i.e., a bug detection tool).\nWe view this as a promising finding: the approach depicted in this section is general and can be launched fully automatically, without requiring manual effort or formal methods, which are usually not scalable. As shown in this section, our tentative study not only rediscovers vulnerabilities found by previous research, but also helps to identify, to the best of our knowledge, previously unknown program components that are vulnerable to SCA. Looking ahead, we would like to explore this direction as a follow-up of the present work."
}
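The localization step described above (normalize the gradient over the trace to [0, 1], then keep the points above a threshold T = 0.8) can be sketched as follows; the toy gradient values are made up for illustration.

```python
# Gradient-based localization of informative side channel records
# (Appendix J): min-max normalize the gradient trace and keep the
# indices whose normalized value exceeds the threshold T = 0.8.

def informative_points(grad, threshold=0.8):
    lo, hi = min(grad), max(grad)
    span = (hi - lo) or 1.0          # avoid division by zero on flat traces
    norm = [(g - lo) / span for g in grad]   # normalize to [0, 1]
    return [i for i, g in enumerate(norm) if g > threshold]

# Toy gradient trace: only positions 2 and 5 stand out.
grad = [0.01, 0.02, 0.90, 0.03, 0.10, 0.95, 0.02]
print(informative_points(grad))   # prints [2, 5]
```

The retained indices are then mapped through the trace encoding back to memory access instructions, and from there to functions in the victim software, as Fig. 27 depicts.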
]
 2021
 
SP:7fb11c941e8d79248ce5ff7caa0535a466303395
 [
"This paper proposes a method of learning ensembles that adhere to an \"ensemble version\" of the information bottleneck principle. Whereas the information bottleneck principle says the representation should avoid spurious correlations between the representation (Z) and the training data (X) that is not useful for predicting the labels (Y), i.e. I(X;Z) or I(X;ZY), this paper proposes that ensembles should additionally avoid spurious correlations between the ensemble members that aren't useful for predicting Y, i.e. I(Z_i; Z_j Y). They show empirically that the coefficient on this term increases diversity at the expense of decreasing accuracy of individual members of the ensemble."
]
 Deep ensembles perform better than a single network thanks to the diversity among their members. Recent approaches regularize predictions to increase diversity; however, they also drastically decrease individual members’ performances. In this paper, we argue that learning strategies for deep ensembles need to tackle the trade-off between ensemble diversity and individual accuracies. Motivated by arguments from information theory and leveraging recent advances in neural estimation of conditional mutual information, we introduce a novel training criterion called DICE: it increases diversity by reducing spurious correlations among features. The main idea is that features extracted from pairs of members should only share information useful for target class prediction without being conditionally redundant. Therefore, besides the classification loss with information bottleneck, we adversarially prevent features from being conditionally predictable from each other. We manage to reduce simultaneous errors while protecting class information. We obtain state-of-the-art accuracy results on CIFAR-10/100: for example, an ensemble of 5 networks trained with DICE matches an ensemble of 7 networks trained independently. We further analyze the consequences on calibration, uncertainty estimation, out-of-distribution detection and online co-distillation.
 [
{
"affiliations": [],
"name": "Alexandre Rame"
}
]
 [
{
"authors": [
"Arturo Hernández Aguirre",
"Carlos A Coello Coello"
],
"title": "Mutual informationbased fitness functions for evolutionary circuit synthesis",
"venue": "In Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No. 04TH8753),",
"year": 2004
},
{
"authors": [
"Matti Aksela"
],
"title": "Comparison of classifier selection methods for improving committee performance",
"venue": "In International Workshop on Multiple Classifier Systems,",
"year": 2003
},
{
"authors": [
"Alex Alemi",
"Ian Fischer",
"Josh Dillon",
"Kevin Murphy"
],
"title": "Deep variational information bottleneck",
"venue": "In In International Conference on Learning Representations,",
"year": 2017
},
{
"authors": [
"Alexander A Alemi",
"Ian Fischer",
"Joshua V Dillon"
],
"title": "Uncertainty in the variational information bottleneck",
"venue": "arXiv preprint arXiv:1807.00906,",
"year": 2018
},
{
"authors": [
"Arsenii Ashukha",
"Alexander Lyzhov",
"Dmitry Molchanov",
"Dmitry Vetrov"
],
"title": "Pitfalls of indomain uncertainty estimation and ensembling in deep learning",
"venue": "In International Conference on Learning Representations,",
"year": 2019
},
{
"authors": [
"Philip Bachman",
"R Devon Hjelm",
"William Buchwalter"
],
"title": "Learning representations by maximizing mutual information across views",
"venue": "In Advances in Neural Information Processing Systems,",
"year": 2019
},
{
"authors": [
"Roberto Battiti"
],
"title": "Using mutual information for selecting features in supervised neural net learning",
"venue": "IEEE Transactions on neural networks,",
"year": 1994
},
{
"authors": [
"Mohamed Ishmael Belghazi",
"Aristide Baratin",
"Sai Rajeshwar",
"Sherjil Ozair",
"Yoshua Bengio",
"Aaron Courville",
"Devon Hjelm"
],
"title": "Mutual information neural estimation",
"venue": "In International Conference on Machine Learning,",
"year": 2018
},
{
"authors": [
"Anthony J Bell",
"Terrence J Sejnowski"
],
"title": "An informationmaximization approach to blind separation and blind deconvolution",
"venue": "Neural computation,",
"year": 1995
},
{
"authors": [
"Hedi BenYounes",
"Remi Cadene",
"Nicolas Thome",
"Matthieu"
],
"title": "Cord. Block: Bilinear superdiagonal fusion for visual question answering and visual relationship detection",
"venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,",
"year": 2019
},
{
"authors": [
"Sangnie Bhardwaj",
"Ian Fischer",
"Johannes Ballé",
"Troy Chinen"
],
"title": "An unsupervised informationtheoretic perceptual quality metric",
"venue": "arXiv preprint arXiv:2006.06752,",
"year": 2020
},
{
"authors": [
"Michael Blot",
"Thomas Robert",
"Nicolas Thome",
"Matthieu Cord"
],
"title": "Shade: Informationbased regularization for deep learning",
"venue": "In 2018 25th IEEE International Conference on Image Processing (ICIP),",
"year": 2018
},
{
"authors": [
"Charles Blundell",
"Julien Cornebise",
"Koray Kavukcuoglu",
"Daan Wierstra"
],
"title": "Weight uncertainty in neural networks",
"venue": "In Proceedings of the 32nd International Conference on International Conference on Machine LearningVolume",
"year": 2015
},
{
"authors": [
"Nicholas A Bowman"
],
"title": "How much diversity is enough? the curvilinear relationship between college diversity interactions and firstyear student outcomes",
"venue": "Research in Higher Education,",
"year": 2013
},
{
"authors": [
"Glenn W Brier"
],
"title": "Verification of forecasts expressed in terms of probability",
"venue": "Monthly weather review,",
"year": 1950
},
{
"authors": [
"Gavin Brown"
],
"title": "A new perspective for information theoretic feature selection",
"venue": "In Artificial intelligence and statistics,",
"year": 2009
},
{
"authors": [
"Gavin Brown",
"Jeremy Wyatt",
"Ping Sun"
],
"title": "Between two extremes: Examining decompositions of the ensemble objective function",
"venue": "In International workshop on multiple classifier systems,",
"year": 2005
},
{
"authors": [
"Gavin Brown",
"Jeremy L Wyatt",
"Peter Tiňo"
],
"title": "Managing diversity in regression ensembles",
"venue": "Journal of machine learning research,",
"year": 2005
},
{
"authors": [
"Changrui Chen",
"Xin Sun",
"Yang Hua",
"Junyu Dong",
"Hongwei Xv"
],
"title": "Learning deep relations to promote saliency detection",
"venue": "In AAAI,",
"year": 2020
},
{
"authors": [
"Defang Chen",
"JianPing Mei",
"Can Wang",
"Yan Feng",
"Chun Chen"
],
"title": "Online knowledge distillation with diverse peers",
"venue": "In AAAI,",
"year": 2020
},
{
"authors": [
"Nadezhda Chirkova",
"Ekaterina Lobacheva",
"Dmitry Vetrov"
],
"title": "Deep ensembles on a fixed memory budget: One wide network or several thinner ones",
"venue": "arXiv preprint arXiv:2005.07292,",
"year": 2020
},
{
"authors": [
"Inseop Chung",
"SeongUk Park",
"Jangho Kim",
"Nojun Kwak"
],
"title": "Featuremaplevel online adversarial knowledge distillation",
"venue": "arXiv preprint arXiv:2002.01775,",
"year": 2020
},
{
"authors": [
"Pierre Comon"
],
"title": "Independent component analysis, a new concept",
"venue": "Signal processing,",
"year": 1994
},
{
"authors": [
"Thomas M Cover"
],
"title": "Elements of information theory",
"venue": null,
"year": 1999
},
{
"authors": [
"Ali Dabouei",
"Sobhan Soleymani",
"Fariborz Taherkhani",
"Jeremy Dawson",
"Nasser M. Nasrabadi"
],
"title": "Exploiting joint robustness to adversarial perturbations",
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),",
"year": 2020
},
{
"authors": [
"J. Deng",
"W. Dong",
"R. Socher",
"L.J. Li",
"K. Li",
"L. FeiFei"
],
"title": "ImageNet: A LargeScale Hierarchical Image Database",
"venue": "In CVPR09,",
"year": 2009
},
{
"authors": [
"Terrance DeVries",
"Graham W Taylor"
],
"title": "Learning confidence for outofdistribution detection in neural networks",
"venue": "arXiv preprint arXiv:1802.04865,",
"year": 2018
},
{
"authors": [
"Thomas G Dietterich"
],
"title": "Ensemble methods in machine learning",
"venue": "In International workshop on multiple classifier systems,",
"year": 2000
},
{
"authors": [
"Monroe D Donsker",
"SR Srinivasa Varadhan"
],
"title": "Asymptotic evaluation of certain markov process expectations for large time",
"venue": "i. Communications on Pure and Applied Mathematics,",
"year": 1975
},
{
"authors": [
"Nikita Dvornik",
"Cordelia Schmid",
"Julien Mairal"
],
"title": "Diversity with cooperation: Ensemble methods for fewshot classification",
"venue": "In Proceedings of the IEEE International Conference on Computer Vision,",
"year": 2019
},
{
"authors": [
"Bradley Efron",
"Robert J Tibshirani"
],
"title": "An introduction to the bootstrap",
"venue": "CRC press,",
"year": 1994
},
{
"authors": [
"Ian Fischer"
],
"title": "The conditional entropy bottleneck",
"venue": "arXiv preprint arXiv:2002.05379,",
"year": 2020
},
{
"authors": [
"Ian Fischer",
"Alexander A Alemi"
],
"title": "Ceb improves model robustness",
"venue": "arXiv preprint arXiv:2002.05380,",
"year": 2020
},
{
"authors": [
"François Fleuret"
],
"title": "Fast binary feature selection with conditional mutual information",
"venue": "Journal of Machine learning research,",
"year": 2004
},
{
"authors": [
"Yoav Freund",
"Robert Schapire"
],
"title": "A short introduction to boosting",
"venue": "JournalJapanese Society For Artificial Intelligence,",
"year": 1999
},
{
"authors": [
"Yarin Gal",
"Zoubin Ghahramani"
],
"title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning",
"venue": "In international conference on machine learning,",
"year": 2016
},
{
"authors": [
"Weihao Gao",
"Sewoong Oh",
"Pramod Viswanath"
],
"title": "Demystifying fixed knearest neighbor information estimators",
"venue": "IEEE Transactions on Information Theory,",
"year": 2018
},
{
"authors": [
"Timur Garipov",
"Pavel Izmailov",
"Dmitrii Podoprikhin",
"Dmitry P Vetrov",
"Andrew G Wilson"
],
"title": "Loss surfaces, mode connectivity, and fast ensembling of dnns",
"venue": "In Advances in Neural Information Processing Systems,",
"year": 2018
},
{
"authors": [
"Tilmann Gneiting",
"Adrian E Raftery"
],
"title": "Strictly proper scoring rules, prediction, and estimation",
"venue": "Journal of the American statistical Association,",
"year": 2007
},
{
"authors": [
"Ian Goodfellow",
"Jean PougetAbadie",
"Mehdi Mirza",
"Bing Xu",
"David WardeFarley",
"Sherjil Ozair",
"Aaron Courville",
"Yoshua Bengio"
],
"title": "Generative adversarial nets",
"venue": "In Advances in neural information processing systems,",
"year": 2014
},
{
"authors": [
"Chuan Guo",
"Geoff Pleiss",
"Yu Sun",
"Kilian Q Weinberger"
],
"title": "On calibration of modern neural networks",
"venue": "In International Conference on Machine Learning,",
"year": 2017
},
{
"authors": [
"Qiushan Guo",
"Xinjiang Wang",
"Yichao Wu",
"Zhipeng Yu",
"Ding Liang",
"Xiaolin Hu",
"Ping Luo"
],
"title": "Online knowledge distillation via collaborative learning",
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),",
"year": 2020
},
{
"authors": [
"Lars Kai Hansen",
"Peter Salamon"
],
"title": "Neural network ensembles",
"venue": "IEEE transactions on pattern analysis and machine intelligence,",
"year": 1990
},
{
"authors": [
"Kaiming He",
"Xiangyu Zhang",
"Shaoqing Ren",
"Jian Sun"
],
"title": "Deep residual learning for image recognition",
"venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,",
"year": 2016
},
{
"authors": [
"Dan Hendrycks",
"Kevin Gimpel"
],
"title": "A baseline for detecting misclassified and outofdistribution examples in neural networks",
"venue": "Proceedings of International Conference on Learning Representations,",
"year": 2017
},
{
"authors": [
"David Hin"
],
"title": "Stackoverflow vs kaggle: A study of developer discussions about data science",
"venue": "arXiv preprint arXiv:2006.08334,",
"year": 2020
},
{
"authors": [
"Geoffrey Hinton",
"Oriol Vinyals",
"Jeff Dean"
],
"title": "Distilling the knowledge in a neural network",
"venue": "stat, 1050:9,",
"year": 2015
},
{
"authors": [
"R Devon Hjelm",
"Alex Fedorov",
"Samuel LavoieMarchildon",
"Karan Grewal",
"Phil Bachman",
"Adam Trischler",
"Yoshua Bengio"
],
"title": "Learning deep representations by mutual information estimation and maximization",
"venue": "In International Conference on Learning Representations,",
"year": 2019
},
{
"authors": [
"Gao Huang",
"Yixuan Li",
"Geoff Pleiss",
"Zhuang Liu",
"John E. Hopcroft",
"Kilian Q. Weinberger"
],
"title": "Snapshot ensembles: Train 1, get m for free",
"venue": null,
"year": 2017
},
{
"authors": [
"Kirthevasan Kandasamy",
"Akshay Krishnamurthy",
"Barnabas Poczos",
"Larry Wasserman"
],
"title": "Nonparametric von mises estimators for entropies, divergences and mutual informations",
"venue": "In Advances in Neural Information Processing Systems,",
"year": 2015
},
{
"authors": [
"Sanjay Kariyappa",
"Moinuddin K. Qureshi"
],
"title": "Improving adversarial robustness of ensembles with diversity training",
"venue": "arXiv preprint arXiv:1901.09981,",
"year": 2019
},
{
"authors": [
"Mete Kemertas",
"Leila Pishdad",
"Konstantinos G. Derpanis",
"Afsaneh Fazly"
],
"title": "Rankmi: A mutual information maximizing ranking loss",
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),",
"year": 2020
},
{
"authors": [
"Hyoungseok Kim",
"Jaekyeom Kim",
"Yeonwoo Jeong",
"Sergey Levine",
"Hyun Oh Song"
],
"title": "Emi: Exploration with mutual information",
"venue": "In International Conference on Machine Learning,",
"year": 2019
},
{
"authors": [
"Hyunjik Kim",
"Andriy Mnih"
],
"title": "Disentangling by factorising",
"venue": "In International Conference on Machine Learning,",
"year": 2018
},
{
"authors": [
"Jangho Kim",
"Minsung Hyun",
"Inseop Chung",
"Nojun Kwak"
],
"title": "Feature fusion for online mutual knowledge distillation",
"venue": "arXiv preprint arXiv:1904.09058,",
"year": 2019
},
{
"authors": [
"Wonsik Kim",
"Bhavya Goyal",
"Kunal Chawla",
"Jungmin Lee",
"Keunjoo Kwon"
],
"title": "Attentionbased ensemble for deep metric learning",
"venue": "In Proceedings of the European Conference on Computer Vision (ECCV),",
"year": 2018
},
{
"authors": [
"Diederik P. Kingma",
"Max Welling"
],
"title": "Autoencoding variational bayes",
"venue": "In Yoshua Bengio and Yann LeCun (eds.), 2nd International Conference on Learning Representations,",
"year": 2014
},
{
"authors": [
"Andreas Kirsch",
"Clare Lyle",
"Yarin Gal"
],
"title": "Unpacking information bottlenecks: Unifying informationtheoretic objectives in deep learning",
"venue": "arXiv preprint arXiv:2003.12537,",
"year": 2020
},
{
"authors": [
"Ron Kohavi",
"David H Wolpert"
],
"title": "Bias plus variance decomposition for zeroone loss functions",
"venue": null,
"year": 1996
},
{
"authors": [
"John F Kolen",
"Jordan B Pollack"
],
"title": "Back propagation is sensitive to initial conditions",
"venue": "In Advances in neural information processing systems,",
"year": 1991
},
{
"authors": [
"Alexander Kraskov",
"Harald Stögbauer",
"Peter Grassberger"
],
"title": "Estimating mutual information",
"venue": "Physical review E,",
"year": 2004
},
{
"authors": [
"Alex Krizhevsky"
],
"title": "Learning multiple layers of features from tiny images",
"venue": null,
"year": 2009
},
{
"authors": [
"Anders Krogh",
"Jesper Vedelsby"
],
"title": "Neural network ensembles, cross validation, and active learning",
"venue": "In Advances in neural information processing systems,",
"year": 1995
},
{
"authors": [
"Ludmila I Kuncheva",
"Christopher J Whitaker"
],
"title": "Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy",
"venue": "Machine learning,",
"year": 2003
},
{
"authors": [
"Balaji Lakshminarayanan",
"Alexander Pritzel",
"Charles Blundell"
],
"title": "Simple and scalable predictive uncertainty estimation using deep ensembles",
"venue": "In Advances in neural information processing systems,",
"year": 2017
},
{
"authors": [
"Xu Lan",
"Xiatian Zhu",
"Shaogang Gong"
],
"title": "Knowledge distillation by onthefly native ensemble",
"venue": "In Proceedings of the 32nd International Conference on Neural Information Processing Systems,",
"year": 2018
},
{
"authors": [
"Yann LeCun",
"Sumit Chopra",
"Raia Hadsell",
"Marc’Aurelio Ranzato",
"Fu Jie Huang"
],
"title": "A tutorial on energybased learning",
"venue": "To appear in “Predicting Structured Data,",
"year": 2006
},
{
"authors": [
"Stefan Lee",
"Senthil Purushwalkam",
"Michael Cogswell",
"David J. Crandall",
"Dhruv Batra"
],
"title": "Why M heads are better than one: Training a diverse ensemble of deep networks",
"venue": null,
"year": 2015
},
{
"authors": [
"Stefan Lee",
"Michael Cogswell",
"Viresh Ranjan",
"David Crandall",
"Dhruv Batra"
],
"title": "Stochastic multiple choice learning for training diverse deep ensembles",
"venue": "In Advances in Neural Information Processing Systems,",
"year": 2016
},
{
"authors": [
"Shiyu Liang",
"Yixuan Li",
"Rayadurgam Srikant"
],
"title": "Enhancing the reliability of outofdistribution image detection in neural networks",
"venue": "In 6th International Conference on Learning Representations,",
"year": 2018
},
{
"authors": [
"Ralph Linsker"
],
"title": "Selforganization in a perceptual network",
"venue": null,
"year": 1988
},
{
"authors": [
"ChengLin Liu",
"Masaki Nakagawa"
],
"title": "Evaluation of prototype learning algorithms for nearestneighbor classifier in application to handwritten character recognition",
"venue": "Pattern Recognition,",
"year": 2001
},
{
"authors": [
"Yong Liu",
"Xin Yao"
],
"title": "Ensemble learning via negative correlation",
"venue": "Neural networks,",
"year": 1999
},
{
"authors": [
"Yong Liu",
"Xin Yao"
],
"title": "Simultaneous training of negatively correlated neural networks in an ensemble",
"venue": "IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics),",
"year": 1999
},
{
"authors": [
"Wesley J Maddox",
"Pavel Izmailov",
"Timur Garipov",
"Dmitry P Vetrov",
"Andrew Gordon Wilson"
],
"title": "A simple baseline for bayesian uncertainty in deep learning",
"venue": "In Advances in Neural Information Processing Systems,",
"year": 2019
},
{
"authors": [
"Andres R. Masegosa"
],
"title": "Learning under model misspecification: Applications to variational and ensemble methods",
"venue": "In Advances in Neural Information Processing Systems,",
"year": 2020
},
{
"authors": [
"Sina Molavipour",
"Germán Bassi",
"Mikael Skoglund"
],
"title": "On neural estimators for conditional mutual information using nearest neighbors sampling",
"venue": "arXiv preprint arXiv:2006.07225,",
"year": 2020
},
{
"authors": [
"Sudipto Mukherjee",
"Himanshu Asnani",
"Sreeram Kannan"
],
"title": "Ccmi: Classifier based conditional mutual information estimation",
"venue": "In Uncertainty in Artificial Intelligence,",
"year": 2020
},
{
"authors": [
"Ryan Muldoon"
],
"title": "Social contract theory for a diverse world: Beyond tolerance",
"venue": null,
"year": 2016
},
{
"authors": [
"Mahdi Pakdaman Naeini",
"Gregory Cooper",
"Milos Hauskrecht"
],
"title": "Obtaining well calibrated probabilities using bayesian binning",
"venue": "In TwentyNinth AAAI Conference on Artificial Intelligence,",
"year": 2015
},
{
"authors": [
"Preetum Nakkiran",
"Gal Kaplun",
"Yamini Bansal",
"Tristan Yang",
"Boaz Barak",
"Ilya Sutskever"
],
"title": "Deep double descent: Where bigger models and more data hurt",
"venue": "In International Conference on Learning Representations,",
"year": 2019
},
{
"authors": [
"Nils J. Nilsson"
],
"title": "Learning machines: Foundations of trainable patternclassifying systems",
"venue": null,
"year": 1965
},
{
"authors": [
"Jeremy Nixon",
"Michael W Dusenberry",
"Linchuan Zhang",
"Ghassen Jerfel",
"Dustin Tran"
],
"title": "Measuring calibration in deep learning",
"venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,",
"year": 2019
},
{
"authors": [
"Jana Novovičová",
"Petr Somol",
"Michal Haindl",
"Pavel Pudil"
],
"title": "Conditional mutual information based feature selection for classification task",
"venue": "In Iberoamerican Congress on Pattern Recognition,",
"year": 2007
},
{
"authors": [
"Sebastian Nowozin",
"Botond Cseke",
"Ryota Tomioka"
],
"title": "fgan: Training generative neural samplers using variational divergence minimization",
"venue": "In Advances in neural information processing systems,",
"year": 2016
},
{
"authors": [
"Yaniv Ovadia",
"Emily Fertig",
"Jie Ren",
"Zachary Nado",
"David Sculley",
"Sebastian Nowozin",
"Joshua Dillon",
"Balaji Lakshminarayanan",
"Jasper Snoek"
],
"title": "Can you trust your model’s uncertainty? evaluating predictive uncertainty under dataset shift",
"venue": "In Advances in Neural Information Processing Systems,",
"year": 2019
},
{
"authors": [
"Tianyu Pang",
"Kun Xu",
"Chao Du",
"Ning Chen",
"Jun Zhu"
],
"title": "Improving adversarial robustness via promoting ensemble diversity",
"venue": "In International Conference on Machine Learning,",
"year": 2019
},
{
"authors": [
"Hanchuan Peng",
"Fuhui Long",
"Chris Ding"
],
"title": "Feature selection based on mutual information criteria of maxdependency, maxrelevance, and minredundancy",
"venue": "IEEE Transactions on pattern analysis and machine intelligence,",
"year": 2005
},
{
"authors": [
"Zhenyue Qin",
"Dongwoo Kim"
],
"title": "Rethinking softmax with crossentropy: Neural network classifier as mutual information estimator",
"venue": "arXiv preprint arXiv:1911.10688,",
"year": 2019
},
{
"authors": [
"Hippolyt Ritter",
"Aleksandar Botev",
"David Barber"
],
"title": "A scalable laplace approximation for neural networks",
"venue": "In 6th International Conference on Learning Representations, ICLR 2018Conference Track Proceedings,",
"year": 2018
},
{
"authors": [
"Lior Rokach"
],
"title": "Ensemblebased classifiers",
"venue": "Artificial intelligence review,",
"year": 2010
},
{
"authors": [
"Andrew Slavin Ross",
"Weiwei Pan",
"Leo Anthony Celi",
"Finale DoshiVelez"
],
"title": "Ensembles of locally independent prediction models",
"venue": "In ThirtyFourth AAAI Conference on Artificial Intelligence,",
"year": 2020
},
{
"authors": [
"Adrià Ruiz",
"Jakob Verbeek"
],
"title": "Distilled Hierarchical Neural Ensembles with Adaptive Inference Cost. working paper or preprint, March 2020",
"venue": "URL https://hal.inria.fr/ hal02500660",
"year": 2020
},
{
"authors": [
"Antoine Saporta",
"Yifu Chen",
"Michael Blot",
"Matthieu Cord"
],
"title": "REVE: Regularizing Deep Learning with Variational Entropy Bound",
"venue": "IEEE International Conference on Image Processing (ICIP)",
"year": 2019
},
{
"authors": [
"Jürgen Schmidhuber"
],
"title": "Learning factorial codes by predictability minimization",
"venue": "Neural computation,",
"year": 1992
},
{
"authors": [
"Claude E Shannon"
],
"title": "A mathematical theory of communication",
"venue": "The Bell system technical journal,",
"year": 1948
},
{
"authors": [
"Changjian Shui",
"Azadeh Sadat Mozafari",
"Jonathan Marek",
"Ihsen Hedhli",
"Christian Gagné"
],
"title": "Diversity regularization in deep ensembles",
"venue": null,
"year": 2018
},
{
"authors": [
"Demetrio SierraMercado",
"Gabriel LázaroMuñoz"
],
"title": "Enhance diversity among researchers to promote participant trust in precision medicine research",
"venue": "The American Journal of Bioethics,",
"year": 2018
},
{
"authors": [
"Harshinder Singh",
"Neeraj Misra",
"Vladimir Hnizdo",
"Adam Fedorowicz",
"Eugene Demchuk"
],
"title": "Nearest neighbor estimates of entropy",
"venue": "American journal of mathematical and management sciences,",
"year": 2003
},
{
"authors": [
"Saurabh Singh",
"Derek Hoiem",
"David Forsyth"
],
"title": "Swapout: Learning an ensemble of deep architectures",
"venue": "In Advances in neural information processing systems,",
"year": 2016
},
{
"authors": [
"Samarth Sinha",
"Homanga Bharadhwaj",
"Anirudh Goyal",
"Hugo Larochelle",
"Animesh Garg",
"Florian Shkurti"
],
"title": "Dibs: Diversity inducing information bottleneck in model ensembles",
"venue": "arXiv preprint arXiv:2003.04514,",
"year": 2020
},
{
"authors": [
"Stefano Soatto",
"Alessandro Chiuso"
],
"title": "Visual representations: Defining properties and deep approximations",
"venue": null,
"year": 2014
},
{
"authors": [
"Casper Kaae Sønderby",
"Jose Caballero",
"Lucas Theis",
"Wenzhe Shi",
"Ferenc Huszár"
],
"title": "Amortised map inference for image superresolution",
"venue": null,
"year": 2016
},
{
"authors": [
"Guocong Song",
"Wei Chai"
],
"title": "Collaborative learning for deep neural networks",
"venue": "In Advances in Neural Information Processing Systems,",
"year": 2018
},
{
"authors": [
"Jiaming Song",
"Stefano Ermon"
],
"title": "Understanding the limitations of variational mutual information estimators",
"venue": "In International Conference on Learning Representations,",
"year": 2020
},
{
"authors": [
"Asa Cooper Stickland",
"Iain Murray"
],
"title": "Diverse ensembles improve calibration",
"venue": "arXiv preprint arXiv:2007.04206,",
"year": 2020
},
{
"authors": [
"Talia H Swartz",
"AnnGel S Palermo",
"Sandra K Masur",
"Judith A Aberg"
],
"title": "The science and value of diversity: Closing the gaps in our understanding of inclusion and diversity",
"venue": "The Journal of infectious diseases,",
"year": 2019
},
{
"authors": [
"Yonglong Tian",
"Dilip Krishnan",
"Phillip Isola"
],
"title": "Contrastive multiview coding",
"venue": null,
"year": 2020
},
{
"authors": [
"Yonglong Tian",
"Chen Sun",
"Ben Poole",
"Dilip Krishnan",
"Cordelia Schmid",
"Phillip Isola"
],
"title": "What makes for good views for contrastive learning",
"venue": "arXiv preprint arXiv:2005.10243,",
"year": 2020
},
{
"authors": [
"Naftali Tishby"
],
"title": "The information bottleneck method",
"venue": "In Proc. 37th Annual Allerton Conference on Communications, Control and Computing,",
"year": 1999
},
{
"authors": [
"Naonori Ueda",
"Ryohei Nakano"
],
"title": "Generalization error of ensemble estimators",
"venue": "In Proceedings of International Conference on Neural Networks (ICNN’96),",
"year": 1996
},
{
"authors": [
"Aaron van den Oord",
"Yazhe Li",
"Oriol Vinyals"
],
"title": "Representation learning with contrastive predictive coding",
"venue": null,
"year": 2018
},
{
"authors": [
"Bogdan Vasilescu",
"Daryl Posnett",
"Baishakhi Ray",
"Mark GJ van den Brand",
"Alexander Serebrenik",
"Premkumar Devanbu",
"Vladimir Filkov"
],
"title": "Gender and tenure diversity in github teams",
"venue": "In Proceedings of the 33rd annual ACM conference on human factors in computing systems,",
"year": 2015
},
{
"authors": [
"David H Wolpert"
],
"title": "Stacked generalization",
"venue": "Neural networks,",
"year": 1992
},
{
"authors": [
"A Wu",
"S Nowozin",
"E Meeds",
"RE Turner",
"JM HernándezLobato",
"AL Gaunt"
],
"title": "Deterministic variational inference for robust bayesian neural networks",
"venue": "In 7th International Conference on Learning Representations,",
"year": 2019
},
{
"authors": [
"Guile Wu",
"Shaogang Gong"
],
"title": "Peer collaborative learning for online knowledge distillation",
"venue": "arXiv preprint arXiv:2006.04147,",
"year": 2020
},
{
"authors": [
"Tailin Wu",
"Ian Fischer"
],
"title": "Phase transitions for the information bottleneck in representation learning",
"venue": null,
"year": 2020
},
{
"authors": [
"Tailin Wu",
"Ian Fischer",
"Isaac L Chuang",
"Max Tegmark"
],
"title": "Learnability for the information",
"venue": "bottleneck. Entropy,",
"year": 2019
},
{
"authors": [
"Pingmei Xu",
"Krista A Ehinger",
"Yinda Zhang",
"Adam Finkelstein",
"Sanjeev R. Kulkarni",
"Jianxiong Xiao"
],
"title": "Turkergaze: Crowdsourcing saliency with webcam based eye tracking",
"venue": "arXiv preprint arXiv:1504.06755,",
"year": 2015
},
{
"authors": [
"Yanchao Yang",
"Stefano Soatto"
],
"title": "Fda: Fourier domain adaptation for semantic segmentation",
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,",
"year": 2020
},
{
"authors": [
"R.W. Yeung"
],
"title": "A new outlook on shannon’s information measures",
"venue": "IEEE Transactions on Information Theory,",
"year": 1991
},
{
"authors": [
"Fisher Yu",
"Ari Seff",
"Yinda Zhang",
"Shuran Song",
"Thomas Funkhouser",
"Jianxiong Xiao"
],
"title": "Lsun: Construction of a largescale image dataset using deep learning with humans in the loop",
"venue": "arXiv preprint arXiv:1506.03365,",
"year": 2015
},
{
"authors": [
"Tianyuan Yu",
"Da Li",
"Yongxin Yang",
"Timothy M Hospedales",
"Tao Xiang"
],
"title": "Robust person reidentification by modelling feature uncertainty",
"venue": "In Proceedings of the IEEE International Conference on Computer Vision,",
"year": 2019
},
{
"authors": [
"Sergey Zagoruyko",
"Nikos Komodakis"
],
"title": "Wide residual networks",
"venue": "Proceedings of the British Machine Vision Conference (BMVC),",
"year": 2016
},
{
"authors": [
"Ruqi Zhang",
"Chunyuan Li",
"Jianyi Zhang",
"Changyou Chen",
"Andrew Gordon Wilson"
],
"title": "Cyclical stochastic gradient mcmc for bayesian deep learning",
"venue": "In International Conference on Learning Representations,",
"year": 2019
},
{
"authors": [
"Ying Zhang",
"Tao Xiang",
"Timothy M Hospedales",
"Huchuan Lu"
],
"title": "Deep mutual learning",
"venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,",
"year": 2018
},
{
"authors": [
"Han Zhao",
"Amanda Coston",
"Tameem Adel",
"Geoffrey J. Gordon"
],
"title": "Conditional learning of fair representations",
"venue": "In International Conference on Learning Representations,",
"year": 2020
},
{
"authors": [
"ZhiHua Zhou",
"Jianxin Wu",
"Wei Tang"
],
"title": "Ensembling neural networks: many could be better than all",
"venue": "Artificial intelligence,",
"year": 2002
}
]
 [
{
"heading": null,
"text": "Deep ensembles perform better than a single network thanks to the diversity among their members. Recent approaches regularize predictions to increase diversity; however, they also drastically decrease individual members’ performances. In this paper, we argue that learning strategies for deep ensembles need to tackle the tradeoff between ensemble diversity and individual accuracies. Motivated by arguments from information theory and leveraging recent advances in neural estimation of conditional mutual information, we introduce a novel training criterion called DICE: it increases diversity by reducing spurious correlations among features. The main idea is that features extracted from pairs of members should only share information useful for target class prediction without being conditionally redundant. Therefore, besides the classification loss with information bottleneck, we adversarially prevent features from being conditionally predictable from each other. We manage to reduce simultaneous errors while protecting class information. We obtain stateoftheart accuracy results on CIFAR10/100: for example, an ensemble of 5 networks trained with DICE matches an ensemble of 7 networks trained independently. We further analyze the consequences on calibration, uncertainty estimation, outofdistribution detection and online codistillation."
},
{
"heading": "1 INTRODUCTION",
"text": "Averaging the predictions of several models can significantly improve the generalization ability of a predictive system. Due to its effectiveness, ensembling has been a popular research topic (Nilsson, 1965; Hansen & Salamon, 1990; Wolpert, 1992; Krogh & Vedelsby, 1995; Breiman, 1996; Dietterich, 2000; Zhou et al., 2002; Rokach, 2010; Ovadia et al., 2019) as a simple alternative to fully Bayesian methods (Blundell et al., 2015; Gal & Ghahramani, 2016). It is currently the de facto solution for many machine learning applications and Kaggle competitions (Hin, 2020).\nEnsembling reduces the variance of estimators (see Appendix E.1) thanks to the diversity in predictions. This reduction is most effective when errors are uncorrelated and members are diverse, i.e., when they do not simultaneously fail on the same examples. Conversely, an ensemble of M identical networks is no better than a single one. In deep ensembles (Lakshminarayanan et al., 2017), the weights are traditionally trained independently: diversity among members only relies on the randomness of the initialization and of the learning procedure. Figure 1 shows that the performance of this procedure quickly plateaus with additional members.\nTo obtain more diverse ensembles, we could adapt the training samples through bagging (Breiman, 1996) and bootstrapping (Efron & Tibshirani, 1994), but a reduction of training samples has a negative impact on members with multiple local minima (Lee et al., 2015). Sequential boosting does not scale well for timeconsuming deep learners that overfit their training dataset. Liu & Yao (1999a;b); Brown et al. (2005b) explicitly quantified the diversity and regularized members into having negatively correlated errors. 
However, these ideas have not significantly improved accuracy when applied to deep learning (Shui et al., 2018; Pang et al., 2019): while members should predict the same target, they force disagreements among strong learners and therefore increase their bias. It highlights the main objective and challenge of our paper: finding a training strategy to reach an improved trade-off between ensemble diversity and individual accuracies (Masegosa, 2020).\n[Figure 2: features extracted from one input should not share more information than features extracted from two inputs in the same class; i.e., the two kinds of pairs should not be differentiable.]\nOur core approach is to encourage all members to predict the same thing, but for different reasons. Therefore the diversity is enforced in the features space and not on predictions. Intuitively, to maximize the impact of a new member, extracted features should bring information about the target that is absent so far, and therefore unpredictable from other members’ features. This would remove spurious correlations, e.g., information redundantly shared among features extracted by different members but useless for class prediction. Such redundancy may be caused by a detail in the image background, and therefore will not be found in features extracted from other images belonging to the same class. It could make members predict badly simultaneously, as shown in Figure 2.\nOur new learning framework, called DICE, is driven by Information Bottleneck (IB) (Tishby, 1999; Alemi et al., 2017) principles, which force features to be concise by forgetting the task-irrelevant factors. Specifically, DICE leverages the Minimum Necessary Information criterion (Fischer, 2020) for deep ensembles, and aims at reducing the mutual information (MI) between features and inputs, but also the information shared between features. We prevent extracted features from being redundant.
As mutual information can detect arbitrary dependencies between random variables (such as symmetry, see Figure 2), we increase the distance between pairs of members: it promotes diversity by reducing predictions’ covariance. Most importantly, DICE protects features’ informativeness by conditioning mutual information upon the target. We build upon recent neural approaches (Belghazi et al., 2018) based on the Donsker-Varadhan representation of the KL formulation of MI.\nWe summarize our contributions as follows:\n• We introduce DICE, a new adversarial learning framework to explicitly increase diversity in ensembles by minimizing the conditional redundancy between features.\n• We rationalize our training objective by arguments from information theory.\n• We propose an implementation through neural estimation of conditional redundancy.\nWe consistently improve accuracy on CIFAR-10/100 as summarized in Figure 1, with better uncertainty estimation and calibration. We analyze how the two components of our loss modify the accuracy-diversity trade-off. We improve out-of-distribution detection and online co-distillation."
},
{
"heading": "2 DICE MODEL",
"text": "Notations Given an input distribution X , a network θ is trained to extract the best possible dense features Z to model the distribution pθ(Y X) over the targets, which should be close to the Dirac on the true label. Our approach is designed for ensembles with M members θi, i ∈ {1, . . . ,M} extracting Zi. In branchbased setup, members share lowlevel weights to reduce computation cost. We average the M predictions in inference. We initially consider an ensemble of M = 2 members.\nQuick overview First, we train each member separately for classification with information bottleneck. Second, we train members together to remove spurious redundant correlations while training adversarially a discriminator. In conclusion, members learn to classify with conditionally uncorrelated features for increased diversity. Our procedure is driven by the following theoretical findings.\n2.A DERIVING TRAINING OBJECTIVE\n2.A.1 BASELINE: NONCONDITIONAL OBJECTIVE\nThe Minimum Necessary Information (MNI) criterion from (Fischer, 2020) aims at finding minimal statistics. In deep ensembles, Z1 and Z2 should capture only minimal information from X , while preserving the necessary information about the task Y . First, we consider separately the two Markov chains Z1 ← X ↔ Y and Z2 ← X ↔ Y . As entropy measures information, entropy of Z1 and Z2 not related to Y should be minimized. We recover IB (Alemi et al., 2017) in deep ensembles: IBβib(Z1, Z2) = 1 βib\n[I(X;Z1) + I(X;Z2)] − [I(Y ;Z1) + I(Y ;Z2)] = IBβib(Z1) + IBβib(Z2). Second, let’s consider I(Z1;Z2): we minimize it following the minimality constraint of the MNI.\nIBRβib,δr (Z1, Z2) = 1 βib\nCompression︷ ︸︸ ︷ [I(X;Z1) + I(X;Z2)]− Relevancy︷ ︸︸ ︷ [I(Y ;Z1) + I(Y ;Z2)] +δr Redundancy︷ ︸︸ ︷ I(Z1;Z2)\n= IBβib(Z1) + IBβib(Z2) + δrI(Z1;Z2).\n(green vertical stripes ) with no overlap with relevancy (red stripes).\nAnalysis In this baseline criterion, relevancy encouragesZ1 and Z2 to capture information about Y . 
Compression & redundancy (R) split the information from X into two compressed & independent views. The relevancy-compression-redundancy trade-off depends on the values of βib & δr.\n2.A.2 DICE: CONDITIONAL OBJECTIVE\nThe problem is that the compression and redundancy terms in IBR also reduce necessary information related to Y: it is detrimental to have Z1 and Z2 fully disentangled while training them to predict the same Y. As shown in Figure 3, redundancy regions (blue horizontal stripes) overlap with relevancy regions (red stripes). Indeed, the true constraints that the MNI criterion really entails are the following conditional equalities given Y:\nI(X;Z1|Y) = I(X;Z2|Y) = I(Z1;Z2|Y) = 0.\nMutual information being non-negative, we transform them into our main DICE objective:\nDICE_βceb,δcr(Z1, Z2) = (1/βceb)[I(X;Z1|Y) + I(X;Z2|Y)] (conditional compression) − [I(Y;Z1) + I(Y;Z2)] (relevancy) + δcr I(Z1;Z2|Y) (conditional redundancy)\n= CEB_βceb(Z1) + CEB_βceb(Z2) + δcr I(Z1;Z2|Y), (1)\nwhere we recover two conditional entropy bottleneck (CEB) (Fischer, 2020) components, CEB_βceb(Zi) = (1/βceb) I(X;Zi|Y) − I(Y;Zi), with βceb > 0 and δcr > 0.\nAnalysis The relevancy terms force features to be informative about the task Y. But contrary to IBR, the DICE bottleneck constraints only minimize information irrelevant to Y. First, the conditional compression removes from Z1 (or Z2) the information from X not relevant to Y. Second, the conditional redundancy (CR) reduces spurious correlations between members and only forces them to have independent biases, but definitely not independent features. It encourages diversity without affecting members’ individual precision, as it protects information related to the target class in Z1 and Z2. Useless information from X to predict Y should certainly not be in Z1 or Z2, but it is even worse if it is in Z1 and Z2 simultaneously, as that would cause simultaneous errors.
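The difference between plain redundancy and conditional redundancy can be illustrated on discrete toy variables (a sketch of ours, not from the paper): when both members extract exactly the class label, I(Z1;Z2) is large even though everything shared is class information, while I(Z1;Z2|Y) is zero, so only DICE leaves such features unpenalized.

```python
import numpy as np

def mi(p_xy):
    """I(X;Y) in nats from a 2-D joint pmf."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float((p_xy[mask] * np.log(p_xy[mask] / (px @ py)[mask])).sum())

def cmi(p_xyz):
    """I(X;Y|Z) from a 3-D joint pmf with Z on the last axis."""
    total = 0.0
    for z in range(p_xyz.shape[2]):
        pz = p_xyz[:, :, z].sum()
        if pz > 0:
            total += pz * mi(p_xyz[:, :, z] / pz)
    return total

# Toy case: both members extract identical, perfectly class-informative
# features (z1 = z2 = y) for two equiprobable classes.
K = 2
p = np.zeros((K, K, K))           # axes: (z1, z2, y)
for y in range(K):
    p[y, y, y] = 1.0 / K

redundancy = mi(p.sum(axis=2))    # I(Z1;Z2) = H(Y) = log 2 > 0
cond_redundancy = cmi(p)          # I(Z1;Z2|Y) = 0
print(redundancy, cond_redundancy)
```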
Even if, for i ∈ {1, 2}, reducing I(Zi;X|Y) indirectly controls I(Z1;Z2|Y) (as I(Z1;Z2|Y) ≤ I(X;Zi|Y) by the chain rule), it is more efficient to directly target this intersection region through the CR term. In a final word, DICE is to IBR for deep ensembles as CEB is to IB for a single network.\nWe now approximate the two CEB components and the CR component of the DICE objective from equation 1.\n2.B APPROXIMATING DICE INTO A TRACTABLE LOSS\n2.B.1 VARIATIONAL APPROXIMATION OF CONDITIONAL ENTROPY BOTTLENECK\nWe leverage the Markov assumptions in Zi ← X ↔ Y, i ∈ {1, 2}, and estimate empirically on the classification training dataset of N i.i.d. points D = {(xn, yn)}, n = 1, ..., N, with yn ∈ {1, ..., K}. Following Fischer (2020), CEB_βceb(Zi) = (1/βceb) I(X;Zi|Y) − I(Y;Zi) is variationally upper bounded by:\nVCEB_βceb({ei, bi, ci}) = (1/N) Σ_{n=1..N} [ (1/βceb) DKL(ei(z|xn) ‖ bi(z|yn)) − E_ε[log ci(yn|ei(xn, ε))] ]. (2)\nSee the explanation in Appendix E.4. ei(z|x) is the true features distribution generated by the encoder, ci(y|z) is a variational approximation of the true distribution p(y|z) by the classifier, and bi(z|y) is a variational approximation of the true distribution p(z|y) by the backward encoder. This loss is applied separately on each member θi = {ei, ci, bi}, i ∈ {1, 2}.\nPractically, we parameterize all distributions with Gaussians. The encoder ei is a traditional neural network features extractor (e.g., ResNet-32) that learns distributions (means and covariances) rather than deterministic points in the features space. That is why ei transforms an image into two tensors: a features-mean ei^µ(x) and a diagonal features-covariance ei^σ(x), each of size d (e.g., 64). The classifier ci is a dense layer that transforms a features sample z into logits to be aligned with the target y through conditional cross-entropy. z is obtained via the reparameterization trick: z = ei(x, ε) = ei^µ(x) + ε · ei^σ(x) with ε ∼ N(0, 1).
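The closed-form Gaussian KL and the reparameterization trick just described can be sketched as follows. This is a simplified NumPy illustration under the stated assumption that the class prior has identity covariance; the helper names are ours, not the authors' code:

```python
import numpy as np

def gaussian_kl_to_class_prior(mu, sigma, class_mu):
    """Closed-form KL(N(mu, diag(sigma^2)) || N(class_mu, I)), summed over d.

    mu, sigma: (batch, d) encoder outputs e_i^mu(x), e_i^sigma(x);
    class_mu:  (batch, d) backward-encoder means b_i^mu(z|y), one per label.
    """
    return (0.5 * (sigma**2 + (mu - class_mu)**2 - 1.0) - np.log(sigma)).sum(axis=1)

def reparameterize(mu, sigma, rng):
    """Reparameterization trick: z = mu + eps * sigma, eps ~ N(0, I)."""
    return mu + rng.standard_normal(mu.shape) * sigma

# Zero KL when the features-mean matches the class-features-mean and the
# features-covariance is 1; any deviation in either is penalized.
mu = np.zeros((2, 4)); sigma = np.ones((2, 4)); class_mu = np.zeros((2, 4))
print(gaussian_kl_to_class_prior(mu, sigma, class_mu))  # [0. 0.]
z = reparameterize(mu, sigma, np.random.default_rng(0))
```

The gradient of the KL pulls ei^µ(x) toward bi^µ(z|y) and ei^σ(x) toward 1, matching the behavior described above.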
Finally, the backward encoder bi is implemented as an embedding layer of size (K, d) that maps the K classes to class-features-means bi^µ(z|y) of size d, as we set the class-features-covariance to 1. The Gaussian parametrization also enables the exact computation of the DKL (see Appendix E.3), which forces (1) the features-mean ei^µ(x) to converge to the class-features-mean bi^µ(z|y) and (2) the predicted features-covariance ei^σ(x) to be close to 1. The advantage of VCEB versus VIB (Alemi et al., 2017) is the class-conditional bi^µ(z|y) versus the non-conditional bi^µ(z), which protects class information.\n2.B.2 ADVERSARIAL ESTIMATION OF CONDITIONAL REDUNDANCY\nTheoretical Problem We now focus on estimating I(Z1;Z2|Y), with no such Markov properties. Despite being a pivotal measure, mutual information estimation historically relied on nearest neighbors (Singh et al., 2003; Kraskov et al., 2004; Gao et al., 2018) or density kernels (Kandasamy et al., 2015) that do not scale well in high dimensions. We benefit from recent advances in neural estimation of mutual information (Belghazi et al., 2018), built on optimizing Donsker & Varadhan (1975) dual representations of the KL divergence. Mukherjee et al. (2020) extended this formulation to conditional mutual information estimation.\nCR = I(Z1;Z2|Y) = DKL(p(Z1, Z2, Y) ‖ p(Z1, Y) p(Z2|Y))\n= sup_f E_{x∼p(z1,z2,y)}[f(x)] − log(E_{x∼p(z1,y)p(z2|y)}[exp(f(x))])\n= E_{x∼p(z1,z2,y)}[f*(x)] − log(E_{x∼p(z1,y)p(z2|y)}[exp(f*(x))]),\nwhere f* computes the pointwise likelihood ratio, i.e., f*(z1, z2, y) = p(z1, z2, y) / (p(z1, y) p(z2|y)).\nEmpirical Neural Estimation We estimate CR (1) using the empirical data distribution and (2) replacing f* = w*/(1 − w*) by the output of a discriminator w, trained to imitate the optimal w*. Let B be a batch sampled from the observed joint distribution p(z1, z2, y) = p(e1(z|x), e2(z|x), y); we select the features extracted by the two members from one input.
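The joint batch just described pairs features from the same input; a batch from the class-conditional product distribution p(z1, y)p(z2|y) can be built by shuffling the second member's features within each class. This is a sketch under our own naming, not necessarily the paper's exact implementation:

```python
import numpy as np

def product_batch(z1, z2, y, rng):
    """Approximate a sample from p(z1, y) p(z2|y): re-pair each z1 with a z2
    extracted from another input of the same class (within-class shuffle)."""
    z2_shuffled = z2.copy()
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        z2_shuffled[idx] = z2[rng.permutation(idx)]
    return z1, z2_shuffled, y
```

The shuffle preserves the per-class marginals of z2 while breaking its pairing with z1, which is exactly what distinguishes the product distribution from the joint one.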
Let Bp be sampled from the product distribution p(z1, y) p(z2|y) = p(e1(z|x), y) p(z2|y); we select the features extracted by the two members from two different inputs that share the same class. We train a multilayer network w on the binary task of distinguishing these two distributions with the standard cross-entropy loss:

Lce(w) = − (1 / (|B| + |Bp|)) [ ∑_{(z1,z2,y)∈B} log w(z1, z2, y) + ∑_{(z1,z2′,y)∈Bp} log(1 − w(z1, z2′, y)) ]. (3)

If w is calibrated (see Appendix B.3), a consistent (Mukherjee et al., 2020) estimate of CR is:

ÎCR_DV = (1/|B|) ∑_{(z1,z2,y)∈B} log f(z1, z2, y)  [Diversity]  −  log( (1/|Bp|) ∑_{(z1,z2′,y)∈Bp} f(z1, z2′, y) )  [Fake correlations],  with f = w / (1 − w).

Intuition By training our members to minimize ÎCR_DV, we force triples from the joint distribution to be indistinguishable from triples from the product distribution. Suppose two features are conditionally correlated: some spurious information is shared between the features only when they come from the same input, and not from two inputs of the same class. This correlation can be informative about a detail in the background, or an unexpected shape in the image, that is rarely found in samples from this input's class. In that case, the product and joint distributions are easily distinguishable by the discriminator. The first adversarial component forces the extracted features to reduce the correlation, so that ideally one of the two features loses this information: it reduces redundancy and increases diversity. The second term would create fake correlations between features from different inputs. As we are not interested in a precise estimation of the CR, we drop this second term, which, empirically, did not increase diversity, as detailed in Appendix G.

L̂CR_DV(e1, e2) = (1/|B|) ∑_{(z1,z2,y)∈B∼p(e1(z|x),e2(z|x),y)} log f(z1, z2, y). (4)

Summary First, we train each member for classification with VCEB from equation 2, as shown in Step 1 of Figure 4.
Second, as shown in Step 2 of Figure 4, the discriminator, conditioned on the class Y, learns to distinguish features sampled from one image versus features sampled from two images belonging to Y. Simultaneously, both members adversarially (Goodfellow et al., 2014) delete spurious correlations to reduce the CR estimate from equation 4 with differentiable signals: this conditionally aligns features. We provide pseudocode in B.4. While we derive similar losses for IBR and CEBR in Appendix E.5, the full DICE loss is finally:

LDICE(θ1, θ2) = VCEB_βceb(θ1) + VCEB_βceb(θ2) + δcr L̂CR_DV(e1, e2). (5)

2.C FULL PROCEDURE WITH M MEMBERS

We expand our objective to an ensemble with M > 2 members. For simplicity we only consider pairwise interactions, to keep quadratic rather than exponential growth in the number of components, and truncate higher-order interactions, e.g. I(Zi; Zj, Zk|Y) (see Appendix F.1). Driven by the previous variational and neural estimations, we train θi = {ei, bi, ci}, i ∈ {1, . . . , M}, on:

LDICE(θ1:M) = ∑_{i=1}^{M} VCEB_βceb(θi) + (δcr / (M − 1)) ∑_{i=1}^{M} ∑_{j=i+1}^{M} L̂CR_DV(ei, ej), (6)

while adversarially training w on Lce. Batch B is sampled from the concatenation of the joint distributions p(zi, zj, y), i, j ∈ {1, . . . , M}, i ≠ j, while Bp is sampled from the product distributions p(zi, y) p(zj|y). We use the same discriminator w for all (M choose 2) estimates. This improves scalability by reducing the number of parameters to be learned: an additional member in the ensemble only adds 256 ∗ d trainable weights to w, where d is the features dimension. See Appendix B.3 for additional information on the discriminator w.
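Under the assumption of a calibrated discriminator, the Donsker-Varadhan estimate and the retained diversity loss of equations 3 and 4 reduce to simple batch statistics over the discriminator's outputs. A minimal NumPy sketch (our own function names; the discriminator outputs are stand-ins, not the trained network):

```python
import numpy as np

def likelihood_ratio(w_out):
    """f = w / (1 - w): pointwise likelihood-ratio estimate from discriminator outputs."""
    return w_out / (1.0 - w_out)

def cr_dv_estimate(w_joint, w_product):
    """Full DV estimate of I(Z1; Z2 | Y): diversity term on the joint batch
    minus the fake-correlations term on the product batch."""
    f_joint = likelihood_ratio(w_joint)
    f_prod = likelihood_ratio(w_product)
    return np.mean(np.log(f_joint)) - np.log(np.mean(f_prod))

def cr_diversity_loss(w_joint):
    """Retained loss (equation 4): only the first term, minimized by the members."""
    return np.mean(np.log(likelihood_ratio(w_joint)))

# A blind discriminator (w = 0.5 everywhere, i.e. the two distributions
# are indistinguishable) yields a zero CR estimate:
blind = np.full(128, 0.5)
```

A confident discriminator (w close to 1 on joint triples) yields a positive loss, which gradient descent on the members then pushes back toward zero, i.e. toward conditional independence.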
},
{
"heading": "3 RELATED WORK",
"text": "To reduce the training cost of deep ensembles (Hansen & Salamon, 1990; Lakshminarayanan et al., 2017), Huang et al. (2017) collect snapshots along training trajectories. One-stage end-to-end co-distillation (Song & Chai, 2018; Lan et al., 2018; Chen et al., 2020b) shares low-level features among members in branch-based ensembles while forcing each member to mimic a dynamic weighted combination of the predictions to increase individual accuracy. However, both methods correlate errors among members, homogenize predictions, and fail to fit the different modes of the data, which overall reduces diversity.

Beyond random initializations (Kolen & Pollack, 1991), authors have implicitly introduced stochasticity into training by providing subsets of data to learners with bagging (Breiman, 1996) or by backpropagating subsets of gradients (Lee et al., 2016); however, the reduction of training samples hurts performance for sufficiently complex models that overfit their training dataset (Nakkiran et al., 2019). Boosting with sequential training is not suitable for deep members (Lakshminarayanan et al., 2017). Some approaches applied different data augmentations (Dvornik et al., 2019; Stickland & Murray, 2020) or used different networks or hyperparameters (Singh et al., 2016; Ruiz & Verbeek, 2020; Yang & Soatto, 2020), but these are not general-purpose and depend on specific engineering choices.

Others explicitly encourage orthogonality of the gradients (Ross et al., 2020; Kariyappa & Qureshi, 2019; Dabouei et al., 2020) or of the predictions, by boosting (Freund & Schapire, 1999; Margineantu & Dietterich) or with a negative correlation regularization (Shui et al., 2018), but they reduce members' accuracy. Second-order PAC-Bayes bounds motivated the diversity loss in Masegosa (2020). As far as we know, adaptive diversity promoting (ADP) (Pang et al., 2019) is the only approach more accurate than the independent baseline: they decorrelate the non-maximal predictions.
The limited success of these logits-based approaches suggests seeking diversity in features instead. Empirically, we found that increasing (L1, L2, −cos) distances between features (Kim et al., 2018) reduces performance: these distances are not invariant to variables' symmetry. Simultaneously to our findings, Sinha et al. (2020) proposed an objective broadly equivalent to our IBR objective (see Appendix C.2), but without information-bottleneck motivations for the diversity loss.

The uniqueness of mutual information (see Appendix E.2) as a distance measure between variables has been exploited in countless machine learning projects, such as reinforcement learning (Kim et al., 2019a), metric learning (Kemertas et al., 2020), or evolutionary algorithms (Aguirre & Coello, 2004). Objectives are often a trade-off between (1) informativeness and (2) compression. In computer vision, unsupervised deep representation learning (Hjelm et al., 2019; van den Oord et al., 2018; Tian et al., 2020a; Bachman et al., 2019) maximizes correlation between features and inputs following Infomax (Linsker, 1988; Bell & Sejnowski, 1995), while discarding information not shared among different views (Bhardwaj et al., 2020), or penalizing the predictability of one latent dimension given the others for disentanglement (Schmidhuber, 1992; Comon, 1994; Kingma & Welling, 2014; Kim & Mnih, 2018; Blot et al., 2018).

The ideal level of compression is task dependent (Soatto & Chiuso, 2014). As a selection criterion, features should not be redundant (Battiti, 1994; Peng et al., 2005) but relevant and complementary given the task (Novovičová et al., 2007; Brown, 2009). As a learning criterion, correlations between features and inputs are minimized according to the Information Bottleneck (Tishby, 1999; Alemi et al., 2017; Kirsch et al., 2020; Saporta et al., 2019), while those between features and targets are maximized (LeCun et al., 2006; Qin & Kim, 2019).
This forces the features to ignore task-irrelevant factors (Zhao et al., 2020) and to reduce overfitting (Alemi et al., 2018) while protecting the needed information (Tian et al., 2020b). Fischer & Alemi (2020) conclude that conditional alignment is superior for reaching the MNI point."
},
{
"heading": "4 EXPERIMENTS",
"text": "In this section, we present our experimental results on the CIFAR10 and CIFAR100 (Krizhevsky et al., 2009) datasets. We detail our implementation in Appendix B. We took most hyperparameter values from Chen et al. (2020b). Hyperparameters for adversarial training and the information bottleneck were fine-tuned on a validation dataset made of 5% of the training dataset; see Appendix D.1. Bold highlights the best score. First, we show the gain in accuracy. Then, we further analyze our strategy's impacts on calibration, uncertainty estimation, out-of-distribution detection and co-distillation.

4.A COMPARISON OF CLASSIFICATION ACCURACY

Table 1: CIFAR100 ensemble classification accuracy (Top-1, %). Components: Div. = diversity loss, I.B. = information bottleneck. Columns: ResNet32 (3-branch / 4-branch / 5-branch / 4-net), ResNet110 (3-branch / 4-branch), WRN28-2 (3-branch / 4-branch / 3-net).

Ind.                        (-, -)        | 76.28±0.12  76.78±0.19  77.24±0.25  77.38±0.12 | 80.54±0.09  80.89±0.31 | 78.83±0.12  79.10±0.08  80.01±0.15
ONE (Lan et al., 2018)      (-, -)        | 75.17±0.35  75.13±0.25  75.25±0.22  76.25±0.32 | 78.97±0.24  79.86±0.25 | 78.38±0.45  78.47±0.32  77.53±0.36
OKDDip (Chen et al., 2020b) (-, -)        | 75.37±0.32  76.85±0.25  76.95±0.18  77.27±0.31 | 79.07±0.27  80.46±0.35 | 79.01±0.19  79.32±0.17  80.02±0.14
ADP (Pang et al., 2019)     (Pred., -)    | 76.37±0.11  77.21±0.21  77.67±0.25  77.51±0.25 | 80.73±0.38  81.40±0.27 | 79.21±0.19  79.71±0.18  80.01±0.17
IB (equation 8)             (-, VIB)      | 76.01±0.12  76.93±0.24  77.22±0.19  77.72±0.12 | 80.43±0.34  81.12±0.19 | 79.19±0.35  79.15±0.12  80.15±0.13
CEB (equation 2)            (-, VCEB)     | 76.36±0.06  76.98±0.18  77.35±0.14  77.64±0.15 | 81.08±0.12  81.17±0.16 | 78.92±0.08  79.20±0.13  80.38±0.18
IBR (equation 9)            (R, VIB)      | 76.68±0.13  77.25±0.13  77.77±0.21  77.84±0.12 | 81.34±0.21  81.38±0.08 | 79.33±0.15  79.90±0.10  80.22±0.10
CEBR (equation 10)          (R, VCEB)     | 76.72±0.08  77.30±0.12  77.81±0.10  77.82±0.11 | 81.52±0.11  81.55±0.33 | 79.25±0.15  79.98±0.07  80.35±0.15
DICE (equation 6)           (CR, VCEB)    | 76.89±0.09  77.51±0.17  78.08±0.18  77.92±0.08 | 81.67±0.14  81.93±0.13 | 79.59±0.13  80.05±0.11  80.55±0.12

Table 1 reports the Top-1 classification accuracy averaged over 3 runs with standard deviation for CIFAR100, while Table 2 focuses on CIFAR10. {3,4,5}-{branch,net} refers to the training of {3,4,5} members {with,without} low-level weight sharing. Ind. refers to independent deterministic deep ensembles without interactions between members (except, optionally, the low-level weight sharing). DICE surpasses concurrent approaches (summarized in Appendix C) for ResNet and WideResNet architectures, in the network setup and even more in the branch setup. We bring significant and systematic improvements over the current state of the art, ADP (Pang et al., 2019): e.g., {+0.52, +0.30, +0.41} for {3,4,5}-branch ResNet32, {+0.94, +0.53} for {3,4}-branch ResNet110, and finally +0.34 for 3-network WRN28-2. Diversity approaches better leverage size, as shown in the main Figure 1, which is detailed in Table 8: on CIFAR100, DICE outperforms Ind. by {+0.60, +0.73, +0.84} for {3,4,5}-branch ResNet32.
Finally, learning only the redundancy loss without compression yields unstable results: CEB learns a distribution (at almost no extra cost) that stabilizes adversarial training (see Appendix F.1) through sampling, with lower standard deviation in results than IB (βib can hinder learnability (Wu et al., 2019b)).

4.B ABLATION STUDY

Branch-based training is attractive: it reduces bias by gradient diffusion among shared layers, at only a slight cost in diversity, which makes our approach even more valuable. We therefore study the 4-branch ResNet32 on CIFAR100 in the following experiments. We ablate the two components of DICE: (1) deterministic, with VIB or with VCEB, and (2) no adversarial loss, or with redundancy, conditionally or not. We measure diversity by the ratio-error (Aksela, 2003), r = Nsingle / Nshared, which computes the ratio between the number of single errors Nsingle and of shared errors Nshared. A higher average over the (M choose 2) pairs means higher diversity, as members are less likely to err on the same inputs. Our analysis remains valid for non-pairwise diversity measures, analyzed in Appendix A.5.

In Figure 5, CEB has slightly higher diversity than Ind.: it benefits from compression. ADP reaches higher diversity but sacrifices individual accuracies. On the contrary, co-distillation OKDDip sacrifices diversity for individual accuracies. The DICE curve is above all others, and notably δcr = 0.2 induces an optimal trade-off between ensemble diversity and individual accuracies on validation. CEBR reaches the same diversity with lower individual accuracies: information about Y is removed.

Figure 6 shows that, starting from random initializations, diversity begins small: DICE minimizes the estimated CR in features and increases diversity in predictions compared to CEB (δcr = 0.0). The effect is correlated with δcr: a high value (0.6) creates too much diversity. On the contrary, a negative value (−0.025) can decrease diversity.
Figure 8 highlights opposing dynamics in accuracies.

Figure 5: Ensemble diversity/individual accuracy trade-off for different strategies. DICE (resp. CEBR) is learned with different δcr (resp. δr). Figure 6: Impact of the diversity coefficient δcr in DICE on the training dynamics on validation: CR is negatively correlated with diversity.

4.C FURTHER ANALYSIS: UNCERTAINTY ESTIMATION AND CALIBRATION

Procedure We follow the procedure from Ashukha et al. (2019). To evaluate the quality of the uncertainty estimates, we report two complementary proper scoring rules (Gneiting & Raftery, 2007): the Negative Log-Likelihood (NLL) and the Brier Score (BS) (Brier, 1950). To measure calibration, i.e., how classification confidences match the observed prediction accuracy, we report the Expected Calibration Error (ECE) (Naeini et al., 2015) and the Thresholded Adaptive Calibration Error (TACE) (Nixon et al., 2019) with 15 bins: TACE resolves some pathologies of ECE through thresholding and adaptive binning. Ashukha et al. (2019) showed that “comparison of [. . .] ensembling methods without temperature scaling (Guo et al., 2017) might not provide a fair ranking”. Therefore, we randomly divide the test set into two equal parts and compute metrics for each half using the temperature T optimized on the other half; their mean is reported. Table 3 compares results after temperature scaling (TS), while those before TS are reported in Table 9 in Appendix A.6.

Results We recover that ensembling improves performance (Ovadia et al., 2019), as one single network (1-net) performs significantly worse than ensemble approaches with 4-branch ResNet32. Members' disagreements decrease internal temperature and improve uncertainty estimation. DICE performs best even after TS, and reduces NLL from 8.13 to 7.98 and BS from 3.24 to 3.12 compared to independent learning. Calibration criteria benefit from diversity, though they do “not provide a consistent ranking” as stated in Ashukha et al. (2019): for example, we notice that ECE highly depends on hyperparameters, especially δcr, as shown in Figure 8 in Appendix A.4.

4.D FURTHER ANALYSIS: DISCRIMINATOR BEHAVIOUR THROUGH OOD DETECTION

To measure the ability of our ensemble to distinguish in- and out-of-distribution (OOD) images, we consider other datasets at test time following Hendrycks & Gimpel (2017) (see Appendix D.2). The confidence score is estimated with the maximum softmax value: the confidence for OOD images should ideally be lower than for CIFAR100 test images.

Temperature scaling (results in Table 7) refines performance (results without TS are in Table 6). DICE beats Ind. and CEB in both cases. Moreover, we suspected that features were more correlated for OOD images: they may share redundant artifacts. DICE×w multiplies the classification logits by the mean over all pairs of 1 − w(zi, zj, ŷ), i ≠ j, with predicted ŷ (as the true y is not available at test time). DICE×w performs even better than DICE+TS, but at the cost of additional operations. It shows that w can detect spurious correlations, adversarially deleted only when found in training.

4.E FURTHER ANALYSIS: DIVERSE TEACHER FOR IMPROVED CO-DISTILLATION

The inference time of network ensembles grows linearly with M. Sharing early features is one solution. We experiment with another one: using only the M-th branch at test time. We combine DICE with OKDDip (Chen et al., 2020b): the M-th branch (the student) learns to mimic the soft predictions of the M−1 first branches (the teacher), among which we enforce diversity. Our teacher has a lower internal temperature (as shown in Experiment 4.C): DICE performs best when soft predictions are generated with lower T. We improve the state of the art by {+0.42, +0.53} for {3,4}-branch setups."
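The temperature-scaling step used in the procedure above can be sketched as a one-parameter search minimizing NLL on one held-out half (a simplified grid-search stand-in for the optimizer-based fit of Guo et al. (2017); the function names are ours):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; T > 1 flattens the distribution, T < 1 sharpens it."""
    scaled = logits / T
    scaled = scaled - scaled.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(scaled)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T=1.0):
    """Negative log-likelihood of the true labels under the scaled softmax."""
    probs = softmax(logits, T)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.5, 3.0, 26)):
    """Pick the temperature minimizing NLL on a held-out split (grid includes T = 1)."""
    return min(grid, key=lambda T: nll(logits, labels, T))

rng = np.random.default_rng(0)
logits = rng.standard_normal((256, 10)) * 5.0   # deliberately overconfident logits
labels = rng.integers(0, 10, size=256)
T_star = fit_temperature(logits, labels)        # fitted on one half, applied to the other
```

Since T = 1 is in the grid, the fitted temperature can never increase the held-out NLL, which is why the paper reports all comparisons after this step.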
},
{
"heading": "5 CONCLUSION",
"text": "In this paper, we addressed the task of improving deep ensembles' learning strategies. Motivated by arguments from information theory, we derived a novel adversarial diversity loss based on conditional mutual information. We tackle the trade-off between individual accuracies and ensemble diversity by deleting spurious and redundant correlations. We reach state-of-the-art performance on standard image classification benchmarks. In Appendix F.2, we also show how to regularize deterministic encoders with conditional redundancy without compression: this increases the applicability of our research findings. The success of many real-world systems in production depends on the robustness of deep ensembles: we hope to pave the way towards general-purpose strategies that go beyond independent learning."
},
{
"heading": "ACKNOWLEDGMENTS",
"text": "This work was granted access to the HPC resources of IDRIS under the allocation 20XXAD011011953 made by GENCI. We acknowledge the financial support by the ANR agency in the chair VISADEEP (project number ANR20CHIA002201). Finally, we would like to thank those who helped and supported us during these confinements, in particular Julie and Rouille."
},
{
"heading": "Appendices",
"text": "Appendix A shows additional experiments. Appendix B describes our implementation to facilitate reproduction. In Appendix C, we summarize the concurrent approaches (see Table 10). In Appendix D, we describe the datasets and the metrics used in our experiments. Appendix E clarifies certain theoretical formulations. In Appendix F, we explain that DICE is a secondorder approximation in terms of information interactions and then we try to apply our diversity regularization to deterministic encoders. Appendix G motivates the removal of the second term from our neural estimation of conditional redundancy. We conclude with a sociological analogy in Appendix H."
},
{
"heading": "A ADDITIONAL EXPERIMENTS",
"text": ""
},
{
"heading": "A.1 COMPARISONS WITH CODISTILLATION AND SNAPSHOTBASED APPROACHES",
"text": "Table 5: Ensemble accuracy on different setups. Concurrent approaches' accuracies are those reported in recent papers. DICE outperforms co-distillation and snapshot-based ensembles collected along the training trajectory, which fail to capture the different modes of the data (Ashukha et al., 2019).

CIFAR100, ResNet32, 3 branches: CL-ILR (Song & Chai, 2018) 72.99 (Chen et al., 2020b) | Ind. 76.28 | DICE 76.89
CIFAR100, ResNet32, 3 nets: DML (Zhang et al., 2018) 76.11 (Chung et al., 2020); AFD (Chung et al., 2020) 76.64 (Chung et al., 2020) | Ind. 76.45 | DICE 76.98
CIFAR100, ResNet110, 3 branches: FFL (Kim et al., 2019b) 78.22 (Wu & Gong, 2020); PCL-E (Wu & Gong, 2020) 80.51 (Wu & Gong, 2020) | Ind. 80.54 | DICE 81.67
CIFAR100, ResNet110, 4 branches: CL-ILR (Song & Chai, 2018) 79.81 (Chen et al., 2020b) | Ind. 80.89 | DICE 81.93
CIFAR100, ResNet110, 5 nets: SWAG (Maddox et al., 2019) 77.69; Cyclic SGLD (Zhang et al., 2019) 74.27; Fast Geometric Ens. (Garipov et al., 2018) 78.78; Variational Inf. (FFG) (Wu et al., 2019a) 77.59; KFAC-Laplace (Ritter et al., 2018) 77.13; Snapshot Ensembles (Huang et al., 2017) 77.17 (all reported by Ashukha et al., 2019) | Ind. 81.7 (Ashukha et al., 2019) | DICE 81.82
CIFAR100, WRN28-2, 3 nets: DML (Zhang et al., 2018) 79.41 (Chung et al., 2020); AFD (Chung et al., 2020) 79.78 (Chung et al., 2020) | Ind. 80.01 | DICE 80.55
CIFAR10, ResNet110, 3 branches: FFL (Kim et al., 2019b) 95.01 (Wu & Gong, 2020); PCL-E (Wu & Gong, 2020) 95.58 (Wu & Gong, 2020) | Ind. 95.62 | DICE 95.74"
},
{
"heading": "A.2 OUTOFDISTRIBUTION DETECTION",
"text": "Table 6 summarizes our OOD experiments in the 4-branch ResNet32 setup. We recover that IB improves OOD detection (Alemi et al., 2018). Moreover, we empirically validate our intuition: features from in-distribution images are on average less predictive of each other than pairs of features from OOD images. w can serve alone as an OOD detector, but is best used as a complement to DICE. In DICE×w, logits are multiplied by the sigmoid output of w averaged over all pairs. Table 7 shows that temperature scaling improves all approaches without modifying the ranking. Finally, DICE×w, even without TS, is better than DICE, even with TS."
},
{
"heading": "A.3 ACCURACY VERSUS SIZE",
"text": "We recover from Table 8 the Memory Split Advantage (MSA) from Chirkova et al. (2020): splitting the memory budget between three branches of ResNet32 results in better performance than spending twice the budget on one ResNet110. DICE further improves this advantage. Our framework is particularly effective in the branch-based setting, as it reduces the computational overhead (especially in terms of FLOPS) at a slight cost in diversity. A 4-branch DICE ensemble has, on average, the same accuracy as a classical 7-branch ensemble."
},
{
"heading": "A.4 TRAINING DYNAMICS IN TERMS OF ACCURACY, UNCERTAINTY ESTIMATION AND CALIBRATION",
"text": ""
},
{
"heading": "A.5 TRAINING DYNAMICS IN TERMS OF DIVERSITY",
"text": "We measured diversity in 4.B with the ratio-error (Aksela, 2003). But, as stated by Kuncheva & Whitaker (2003), diversity can be measured in numerous ways. For pairwise measures, we averaged over the (M choose 2) pairs: the Q-statistic is positive when classifiers recognize the same objects, and the agreement score measures the frequency with which both classifiers predict the same class. Note that even if we only apply pairwise constraints, we also increase non-pairwise measures: for example, the Kohavi-Wolpert variance (Kohavi et al., 1996), which measures the variability of the predicted class, and the entropy diversity, which measures overall disagreement."
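The pairwise measures above can be computed directly from the members' error indicators. A minimal sketch (our own implementation, not the authors' code; err[i][n] is 1 when member i misclassifies input n):

```python
import numpy as np
from itertools import combinations

def ratio_error(err_a, err_b):
    """Aksela (2003) ratio-error r = N_single / N_shared for one pair of members:
    inputs where exactly one member errs, over inputs where both err together."""
    single = np.sum(err_a != err_b)   # exactly one of the two members errs
    shared = np.sum(err_a & err_b)    # both members err on the same input
    return single / max(shared, 1)    # guard against division by zero

def mean_pairwise(measure, errors):
    """Average a pairwise measure over the (M choose 2) pairs of members."""
    vals = [measure(a, b) for a, b in combinations(errors, 2)]
    return float(np.mean(vals))

# Three members, five inputs: 1 = error, 0 = correct prediction.
errors = np.array([[1, 0, 0, 1, 0],
                   [1, 1, 0, 0, 0],
                   [0, 0, 1, 1, 0]], dtype=bool)
```

For members 1 and 2 above, two inputs are missed by exactly one member and one input is missed by both, giving r = 2; a higher average r over all pairs means higher diversity.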
},
{
"heading": "A.6 UNCERTAINTY ESTIMATION AND CALIBRATION BEFORE TEMPERATURE SCALING",
"text": ""
},
{
"heading": "B TRAINING DETAILS",
"text": ""
},
{
"heading": "B.1 GENERAL OPTIMIZATION",
"text": "Experiments Classical hyperparameters were taken from Chen et al. (2020b) for fair comparisons. Newly added hyperparameters were fine-tuned on a validation dataset made of 5% of the training dataset.

Architecture We implemented the proposed method with ResNet (He et al., 2016) and WideResNet (Zagoruyko & Komodakis, 2016) architectures. Following standard practice, we average the logits of our predictions uniformly. For branch-based ensembles, we separate the last block and the classifier of each member from the weight sharing, while the other low-level layers are shared.

Learning Following Chen et al. (2020b), we used SGD with Nesterov momentum of 0.9, mini-batch size of 128, weight decay of 5e-4, 300 epochs, and a standard learning rate scheduler that sets values {0.1, 0.001, 0.0001} at steps {0, 150, 225} for CIFAR10/100. On CIFAR100, we additionally set the learning rate to 0.00001 at step 250. We used traditional basic data augmentation consisting of horizontal flips and a random crop of 32 pixels with a padding of 4 pixels. The learning curve is shown in Figure 8.

B.2 INFORMATION BOTTLENECK IMPLEMENTATION

Architecture Features are extracted just before the dense layer, since deeper layers are more semantic; they are of size d = {64, 128, 256} for {ResNet32, WRN28-2, ResNet110}. Our encoder does not provide a deterministic point in the feature space but a feature distribution encoded by a mean and a diagonal covariance matrix. The covariance is predicted after a Softplus activation function with one additional dense layer, taking as input the features-mean, with d(d + 1) trainable weights. In training, we sample once from this feature distribution with the reparameterization trick. In inference, we predict from the distribution's mean (and therefore only once). We parameterized b(z|y) ∼ N(bµ(y), 1) with trainable mean and unit diagonal covariance, adding d trainable weights per class.
As noticed in Fischer & Alemi (2020), this can be represented as a single embedding layer mapping one-hot classes to d-dimensional tensors. Therefore, in total we only add d(d + 1 + K) trainable weights, which can all be discarded during inference. For VIB, the embedding bµ is shared among classes: in total it adds d(d + 2) trainable weights. Contrary to recent IB approaches (Wu et al., 2019b; Wu & Fischer, 2020; Fischer & Alemi, 2020), we only have one dense layer to predict logits after the features bottleneck, and we did not change the batch normalization, for fair comparisons with traditional ensemble methods.

Scheduling We employ the jump-start method that facilitates the learning of bottleneck-inspired models (Wu et al., 2019b; Wu & Fischer, 2020; Fischer & Alemi, 2020): we progressively anneal the value of βceb. For CIFAR10, we took the scheduling from Fischer & Alemi (2020), except that we widened the intervals to make the training loss decrease more smoothly: log(βceb) reaches values {100, 10, 2} at steps {0, 5, 100}. No standard scheduling was available for CIFAR100. As it is more difficult than CIFAR10, we added additional jump-epochs with lower values: log(βceb) reaches values {100, 10, 2, 1.5, 1} at steps {0, 8, 175, 250, 300}. This slow scheduling progressively increases the covariance predictions eσ(x) and facilitates learning. For VIB, we scheduled similarly using the equivalence from Fischer (2020): βib = βceb + 1. We found VCEB to have a lower standard deviation in performance than VIB: βib can hinder learnability (Wu et al., 2019b). These schedulings have been used in all our setups, without and with redundancy losses, for ResNet32, ResNet110 and WRN28-10, and for 1 to 10 members."
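The parameter accounting above, d(d + 1) weights for the covariance layer plus d per class for the backward embedding, can be checked directly, together with the Softplus positivity that makes the predicted covariance valid. A minimal sketch (our own names, following the description in B.2):

```python
import numpy as np

def softplus(x):
    """Smooth positive activation used to predict the diagonal covariance."""
    return np.log1p(np.exp(x))

def stochastic_head_params(d, K):
    """Trainable weights added by VCEB on top of a deterministic encoder:
    a dense layer mean -> covariance (d*d weights + d biases) and a (K, d)
    backward-encoder embedding b_mu(z|y); all discarded at inference."""
    cov_layer = d * d + d        # d(d + 1)
    backward_embedding = K * d   # one d-vector of class-features-means per class
    return cov_layer + backward_embedding

rng = np.random.default_rng(0)
d, K = 64, 100                    # ResNet32 feature size, CIFAR-100 classes
W = rng.standard_normal((d, d))   # covariance-prediction dense layer
b = np.zeros(d)
mu = rng.standard_normal(d)       # features-mean
sigma = softplus(W @ mu + b)      # predicted diagonal covariance, always > 0
```

The total for d = 64 and K = 100 is d(d + 1 + K) = 10,560 extra weights, negligible next to the backbone and unused at inference.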
},
{
"heading": "B.3 ADVERSARIAL TRAINING IMPLEMENTATION",
"text": "Redundancy Following standard adversarial learning practices, our discriminator for redundancy estimation is an MLP with 4 layers of sizes {256, 256, 100, 1}, with leaky ReLUs of slope 0.2, optimized by RMSProp with learning rate {0.003, 0.005} for CIFAR{10, 100}. We empirically found that four steps for the discriminator per step of the classifier increase stability. Specifically, it takes as input the concatenation of the two hidden representations of size d, sampled with a reparameterization trick. Gradients are not backpropagated into the layer that predicts the covariance, as this would artificially increase the covariance to reduce the mutual information among branches. The output, followed by a sigmoid activation function, should be close to 1 (resp. 0) if the sample comes from the joint (resp. product) distribution.

Conditional Redundancy The discriminator for CR estimation needs to take into account the target class Y. It first embeds Y in an embedding layer of size 64, which is concatenated to the inputs of the first and second layers. Improved feature-merging methods could be applied, such as Ben-Younes et al. (2019). The output has size K, and we select the index associated with Y. We note in Figure 11 that our discriminator remains calibrated.

Ensemble with M Models In the general case, we only consider pairwise interactions, therefore we need to estimate (M choose 2) values. To reduce the number of parameters, we use only one discriminator w. Features associated with zk are filled with zeros when we sample from p(zi, zj, y) or from p(zi, y)p(zj|y), where i, j, k ∈ {1, . . . , M}, k ≠ i and k ≠ j. Therefore, the input tensor
Therefore, the input tensor\nfor the discriminator is of size (M ∗ d + 64): its first layer has (M ∗ d + 64) ∗ 256 dense weights: the number of weights in w scales linearly with M and d as w’s input grows linearly, but w’s hidden size remains fixed.\nδcr value For branchbased and networkbased CIFAR100, we found δcr at {0.1, 0.15, 0.2, 0.22, 0.25} for {2, 3, 4, 5, 6} members to perform best on the validation dataset when training on 95% on the classical training dataset. For CIFAR10, {0.1} for 4 members. We found that lower values of δr were necessary for our baselines IBR and CEBR.\nScheduling For fair comparison, we apply the traditional rampup scheduling up to step 80 from the codistillation literature (Lan et al., 2018; Kim et al., 2019b; Chen et al., 2020b) to all concurrent approaches and to our redundancy training.\nSampling To sample from p(z1, z2, y), we select features extracted from one image. To sample from p(z1, y)p(z2y), we select features extracted from two different inputs, that share the same class y. In practise, we keep a memory from previous batches as the batch size is 128 whereas we have 100 classes in CIFAR100. This memory, of size M ∗ d ∗ K ∗ 4, is updated at the end of each training step. Our sampling is a special case of kNN sampling (Molavipour et al., 2020): as we sample from a discrete categorical variable, the closest neighbour has exactly the same discrete value. The training can be unstable as it minimises the divergence between two distributions. To make them overlap over the features space, we sample numsample = {4} times from the gaussian distribution of Z1 and Z2 with the reparameterization trick. This procedure is similar to instance noise (Sønderby et al., 2016) and it allows us to safely optimise w at each iteration. It gives better robustness than just giving the gaussian mean. Moreover, we progressively ease the discriminator task by scheduling the covariance through time with a linear rampup. 
First the covariance is set to 1 until epoch 100, then it linearly reduces to the predicted covariance eσi (x) until step 250. We sample a ratio rationegpos of one positive pair for {2, 4} negative pairs on CIFAR{10, 100}.\nClipping Following Bachman et al. (2019), we clip the density ratios (tanhclip) by computing the non linearity exp[τ tanh log[f(z1,z2,y)]τ ]. A lower τ reduces the variance of the estimation and stabilizes the training even with a strong discriminator, at the cost of additional bias. The clipping threshold τ was set to 10 as in Song & Ermon (2020)."
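The tanh-clipping of density ratios described above can be written down directly from its formula (a minimal sketch with our own function name; τ = 10 follows Song & Ermon (2020)):

```python
import numpy as np

def tanh_clip(f, tau=10.0):
    """Soft-clip a density ratio f into [exp(-tau), exp(tau)]:
    exp(tau * tanh(log(f) / tau)). Near f = 1 the map is close to the
    identity; extreme ratios are squashed, trading extra bias for
    lower variance of the DV estimate."""
    return np.exp(tau * np.tanh(np.log(f) / tau))

ratios = np.array([1e-9, 0.5, 1.0, 2.0, 1e9])
clipped = tanh_clip(ratios)
```

The map is monotonic, leaves f = 1 unchanged, and bounds the clipped ratio between exp(−τ) and exp(τ), so a single overconfident discriminator output cannot dominate the batch estimate.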
},
{
"heading": "B.4 PSEUDOCODE",
"text": "Algorithm 1: Full DICE Procedure for M = 2 members

/* Setup */
Parameters: θ1 = {e1, b1, c1}, θ2 = {e2, b2, c2} and discriminator w, randomly initialized
Input: observations {xn, yn}, n = 1, . . . , N, coefficients βceb and δcr, schedulings scheceb and rampup (with start and end steps), clipping threshold τ, batch size b, optimisers gθ1,2 and gw, number of discriminator steps nstepd, number of samples nums, ratio of positive/negative samples rationegpos

/* Training Procedure */
1   for s ← 1 to 300 do
2       βceb_s ← scheceb(startvalue=0, endvalue=βceb, step=s)
3       δcr_s ← rampup[0→80](startvalue=0, endvalue=δcr, step=s)
4       Randomly select batch {(xn, yn)}, n ∈ B, of size b                      // Batch sampling
        /* Step 1: Classification Loss with CEB */
5       for i ← 1 to 2 do
6           zi_n ← eµi(xn) + ε · eσi(xn), ∀n ∈ B, with ε ∼ N(0, 1)
7           VCEBi ← (1/b) ∑_{n∈B} { (1/βceb_s) DKL(ei(z|xn)‖bi(z|yn)) − log ci(yn|zi_n) }
        /* Step 2: Diversity Loss with Conditional Redundancy */
8       for i ← 1 to 2 do
9           eσ,s_i(xn) ← rampup[100→250](startvalue=1, endvalue=eσi(xn), step=s)
10          for k ← 1 to nums do
11              zi,k_n ← eµi(xn) + ε · eσ,s_i(xn), ∀n ∈ B, with ε ∼ N(0, 1)
12      B ← {(z1,k_n, z2,k_n, yn)}, ∀n ∈ B, ∀k ∈ {1, . . . , nums}              // Joint distrib.
13      L̂CR_DV ← (1/|B|) ∑_{t∈B} log f(t), with f(t) ← tanhclip(w(t)/(1 − w(t)), τ)
14      θ1,2 ← gθ1,2(∇θ1 VCEB1 + ∇θ2 VCEB2 + δcr_s ∇θ1,2 L̂CR_DV)              // Backprop ensemble
        /* Step 3: Adversarial Training */
15      for j ← 1 to nstepd do
16          B ← {(z1,k_n, z2,k_n, yn)}, ∀n ∈ B, ∀k ∈ {1, . . . , nums}          // Joint distrib.
17          Bp ← {(z1,k_n, z2,k′_n′, yn)}, ∀n ∈ B, ∀k ∈ {1, . . . , nums}, k′ ∈ {1, . . . , rationegpos},
18              with n′ ∈ B, yn = yn′, n ≠ n′                                   // Product distrib.
19          w ← gw(∇w Lce(w))                                                   // Backprop discriminator
20          Sample new zi,k_n

/* Test Procedure */
Data: inputs {xn}, n = 1, . . . , T                                             // Test data
Output: argmax_{k∈{1,...,K}} (1/2)[c1(eµ1(xn)) + c2(eµ2(xn))], ∀n ∈ {1, . . . , T}"
},
{
"heading": "B.5 EMPIRICAL LIMITATIONS",
"text": "Our approach relies on very recent works on neural estimation of mutual information, which still suffer from loose approximations. Improvements in this area would facilitate our learning procedure. Our approach increases the number of operations because of the adversarial procedure, but only during training: the inference time remains the same."
},
{
"heading": "C CONCURRENT APPROACHES",
"text": "Concurrent approaches can be divided into two general patterns: they promote either individual accuracy through codistillation, or ensemble diversity."
},
{
"heading": "C.1 CODISTILLATION APPROACHES",
"text": "Contrary to traditional distillation (Hinton et al., 2015), which aligns the soft predictions of a static pretrained strong teacher and a smaller student, online codistillation performs teaching in an end-to-end one-stage procedure: the teacher and the student are trained simultaneously.\nDistillation in Logits The seminal ”Deep Mutual Learning” (DML) (Zhang et al., 2018) introduced the main idea: multiple networks learn to mimic each other by reducing KL losses between pairs of predictions. ”Collaborative learning for deep neural networks” (CLILR) (Song & Chai, 2018) used the branch-based architecture, sharing low-level layers to reduce the training complexity, and ”Knowledge Distillation by On-the-Fly Native Ensemble” (ONE) (Lan et al., 2018) used a weighted combination of logits as teacher, hence providing better information to each network. ”Online Knowledge Distillation via Collaborative Learning” (KDCL) (Guo et al., 2020) computed the optimal weights on a held-out validation dataset. ”Feature Fusion for Online Mutual Knowledge Distillation” (FFL) (Kim et al., 2019b) introduced a feature fusion module. These approaches improve individual performance at the cost of increased homogenization. ”Online Knowledge Distillation with Diverse Peers” (OKDDip) (Chen et al., 2020b) slightly alleviates this problem with an asymmetric distillation and a self-attention mechanism. ”Peer Collaborative Learning for Online Knowledge Distillation” (PCL) (Wu & Gong, 2020) benefited from the mean-teacher paradigm with temporal ensembling and from diverse data augmentation, at the cost of multiple inferences through the shared backbone.\nDistillation in Features Whereas all previous approaches only apply distillation on the logits, the recent ”Feature-map-level Online Adversarial Knowledge Distillation” (AFD) (Chung et al., 2020) aligned feature distributions by adversarial training. Note that this is not opposed to our approach: they force distributions to be similar, while we force them to be uncorrelated."
},
{
"heading": "C.2 DIVERSITY APPROACHES",
"text": "On the other hand, some recent papers in computer vision explicitly encourage diversity among the members with regularization losses.\nDiversity in Logits ”Diversity Regularization in Deep Ensembles” (Shui et al., 2018) applied negative correlation learning (Liu & Yao, 1999a) to regularize the training for improved calibration, with no impact on accuracy. ”Learning under Model Misspecification: Applications to Variational and Ensemble methods” (Masegosa, 2020) theoretically motivated the minimization of second-order PAC-Bayes bounds for ensembles, empirically estimated through a generalized variational method.\n”Adaptive Diversity Promoting” (ADP) (Pang et al., 2019) decorrelates only the non-maximal predictions to maintain the individual accuracies, while promoting ensemble entropy: it forces different members to produce different rankings among the non-maximal predictions. However, Liang et al. (2018) showed that rankings of outputs are critical: for example, non-maximal logits tend to be more separated from each other for in-domain inputs than for out-of-domain inputs. Therefore individual accuracies are decreased. Coefficients α and β are respectively set to 2 and 0.5, as in the original paper.\nDiversity in Features One could think of increasing classical distances among features, such as the L2 distance in (Kim et al., 2018), but in our experiments it reduces overall accuracy: it is not even invariant to linear transformations such as translation. ”Diversity inducing Information Bottleneck in Model Ensembles” (DIBS) from Sinha et al. (2020) trains a multi-branch network and applies VIB on each individual branch, encoding p(z|y) ∼ N(0, 1), which was shown to be hard to learn (Wu & Fischer, 2020). Moreover, we notice that their diversity-inducing adversarial loss is an estimation of the JS-divergence between pairs of features, built on the dual f-divergence representation (Nowozin et al., 2016): a similar idea was recently used for saliency detection (Chen et al., 2020a). As the JS-divergence is a symmetrized formulation of the KL, we argue that DIBS and IBR share the same motivations and only have minor discrepancies: the adversarial terms in the DIBS loss are computed with both terms sampled from the same branch and with both terms sampled from the same prior. In our experiments, these differences reduce overall performance. We will include their scores when they publish measurable results on CIFAR datasets or when they release their code.\nDiversity in Gradients ”Improving adversarial robustness of ensembles with diversity training” (GAL) (Kariyappa & Qureshi, 2019) enforced diversity in the gradients with a gradient alignment loss. ”Exploiting Joint Robustness to Adversarial Perturbations” (Dabouei et al., 2020) considered the optimal bound for the similarity of gradients. However, as stated in the latter, “promoting diversity of gradient directions slightly degrades the classification performance on natural examples . . . [because] classifiers learn to discriminate input samples based on distinct sets of representative features”. Therefore we do not consider them as concurrent work."
},
{
"heading": "D EXPERIMENTAL SETUP",
"text": ""
},
{
"heading": "D.1 TRAINING DATASETS",
"text": "We train our procedure on two image classification benchmarks, CIFAR100 and CIFAR10 (Krizhevsky et al., 2009). They consist of 60k 32×32 natural colored images in respectively 100 and 10 classes, with 50k training images and 10k test images. For hyperparameter selection and ablation studies, we train on 95% of the training dataset and analyze performance on a validation set made of the remaining 5%."
},
{
"heading": "D.2 OOD",
"text": "Dataset We used the traditional out-of-distribution datasets for CIFAR100, described in Liang et al. (2018): TinyImageNet (Deng et al., 2009), LSUN (Yu et al., 2015), iSUN (Xu et al., 2015), and CIFAR10. We borrowed the evaluation code from https://github.com/uoguelph-mlrg/confidence_estimation (DeVries & Taylor, 2018).\nMetrics We report the standard metrics for binary classification: FPR at 95% TPR, detection error, AUROC (Area Under the Receiver Operating Characteristic curve) and AUPR (Area Under the Precision-Recall curve, In or Out depending on which dataset is specified as positive). See Liang et al. (2018) for definitions and interpretations of these metrics."
},
{
"heading": "E ADDITIONAL THEORETICAL ELEMENTS",
"text": ""
},
{
"heading": "E.1 BIAS VARIANCE COVARIANCE DECOMPOSITION",
"text": "The Bias-Variance-Covariance Decomposition (Ueda & Nakano, 1996) generalizes the Bias-Variance Decomposition (Kohavi et al., 1996) by treating the ensemble of M members as a single learning unit:\nE[(f − t)^2] = bias^2 + (1/M) var + (1 − 1/M) covar, (7)\nwith\nbias = (1/M) ∑_i (E[f_i] − t),\nvar = (1/M) ∑_i E[(f_i − E[f_i])^2],\ncovar = (1/(M(M − 1))) ∑_i ∑_{j≠i} E[(f_i − E[f_i])(f_j − E[f_j])].\nThe estimation improves when the covariance between members is zero: the reduction factor of the variance component equals M when errors are uncorrelated. Compared to the Bias-Variance Decomposition (Kohavi et al., 1996), this leads to a variance reduction by a factor of 1/M. Brown et al. (2005a;b) summarized it this way: “in addition to the bias and variance of the individual estimators, the generalisation error of an ensemble also depends on the covariance between the individuals. This raises the interesting issue of why we should ever train ensemble members separately; why shouldn’t we try to find some way to capture the effect of the covariance in the error function?”.\nE.2 MUTUAL INFORMATION\nNobody knows what entropy really is.\nJohn von Neumann to Claude Shannon\nA cornerstone of Shannon’s information theory (Shannon, 1948), mutual information is the difference between the sum of the individual entropies and the entropy of the variables considered jointly. Stated otherwise, it is the reduction in the uncertainty of one variable due to the knowledge of the other variable (Cover, 1999). Entropy owes its name to the thermodynamic measure of uncertainty introduced by Rudolf Clausius and developed by Ludwig Boltzmann.\nI(Z1;Z2) = H(Z1) + H(Z2) − H(Z1, Z2) = H(Z1) − H(Z1|Z2) = DKL(P(Z1, Z2)‖P(Z1)P(Z2)).\nThe conditional mutual information generalizes mutual information when a third variable is given:\nI(Z1;Z2|Y) = DKL(P(Z1, Z2|Y)‖P(Z1|Y)P(Z2|Y))."
},
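The decomposition in equation 7 can be checked numerically. The sketch below simulates hypothetical correlated estimators (all constants are illustrative, not from the paper) and verifies that the ensemble error equals bias² + var/M + (1 − 1/M)·covar when expectations are replaced by empirical averages:

```python
import numpy as np

rng = np.random.default_rng(0)
M, T, t = 4, 10_000, 1.0                     # members, trials, target value
shared = rng.normal(0.0, 0.5, size=(T, 1))   # noise common to all members
own = rng.normal(0.0, 0.5, size=(T, M))      # member-specific noise
f = t + 0.1 + shared + own                   # biased, correlated predictions

bias = (f.mean(axis=0) - t).mean()
centered = f - f.mean(axis=0)
var = (centered ** 2).mean(axis=0).mean()
cov_mat = centered.T @ centered / T
covar = (cov_mat.sum() - np.trace(cov_mat)) / (M * (M - 1))

lhs = ((f.mean(axis=1) - t) ** 2).mean()     # E[(f - t)^2] for the ensemble mean
rhs = bias ** 2 + var / M + (1 - 1 / M) * covar
```

The shared noise makes covar strictly positive; removing it recovers the full 1/M variance reduction discussed above.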
{
"heading": "E.3 KL BETWEEN GAUSSIANS",
"text": "The Kullback-Leibler divergence (Kullback, 1959) between two Gaussian distributions takes a particularly simple form:\nDKL(e(z|x)‖b(z|y)) = log(bσ(y)/eσ(x)) + (eσ(x)^2 + (eµ(x) − bµ(y))^2)/(2 bσ(y)^2) − 1/2 (Gaussian param.)\n= 1/2 [(eσ(x)^2 − log(eσ(x)^2) − 1)︸Variance + (eµ(x) − bµ(y))^2︸Mean]. (bσ(y) = 1)\nThe variance component forces the predicted variance eσ(x) to be close to bσ(y) = 1. The mean component forces the class-embedding bµ(y) to converge to the average of the different elements in its class. These class-embeddings are similar to class-prototypes, highlighting a theoretical link between CEB (Fischer, 2020; Fischer & Alemi, 2020) and prototype-based learning methods (Liu & Nakagawa, 2001)."
},
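The two forms of the KL between Gaussians can be cross-checked numerically; the helper names below are ours:

```python
import numpy as np

def kl_gauss(mu_e, sig_e, mu_b, sig_b):
    # Closed-form KL( N(mu_e, sig_e^2) || N(mu_b, sig_b^2) ).
    return (np.log(sig_b / sig_e)
            + (sig_e ** 2 + (mu_e - mu_b) ** 2) / (2 * sig_b ** 2) - 0.5)

def kl_unit_target(mu_e, sig_e, mu_b):
    # Simplified form when the backward variance is fixed: sig_b = 1.
    return 0.5 * ((sig_e ** 2 - np.log(sig_e ** 2) - 1)
                  + (mu_e - mu_b) ** 2)
```

The simplified form splits exactly into the variance term and the mean term described above.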
{
"heading": "E.4 DIFFERENCE BETWEEN VCEB AND VIB",
"text": "In Fischer (2020), CEB is variationally upper bounded by VCEB. We detail the computations:\nCEB_βceb(Z) = (1/βceb) I(X;Z|Y) − I(Y;Z) (Definition)\n= (1/βceb) [I(X,Y;Z) − I(Y;Z)] − I(Y;Z) (Chain rule)\n= (1/βceb) [I(X;Z) − I(Y;Z)] − I(Y;Z) (Markov assumptions)\n= (1/βceb) [−H(Z|X) + H(Z|Y)] − [H(Y) − H(Y|Z)] (MI as diff. of 2 ent.)\n≤ (1/βceb) [−H(Z|X) + H(Z|Y)] − [−H(Y|Z)] (Non-negativity of ent.)\n= ∫ {(1/βceb) log(e(z|x)/p(z|y)) − log p(y|z)} p(x, y, z) dx dy dz (Definition of ent.)\n≤ ∫ {(1/βceb) log(e(z|x)/b(z|y)) − log c(y|z)} p(x, y) e(z|x) dx dy dz (Variational approx.)\n≈ (1/N) ∑_{n=1}^{N} ∫ {(1/βceb) log(e(z|x^n)/b(z|y^n)) − log c(y^n|z)} e(z|x^n) dz (Empirical data distrib.)\n≈ VCEB_βceb(θ = {e, b, c}), (Reparameterization trick)\nwhere\nVCEB_βceb(θ = {e, b, c}) = (1/N) ∑_{n=1}^{N} {(1/βceb) DKL(e(z|x^n)‖b(z|y^n)) − E_ε log c(y^n|e(x^n, ε))}.\nAs a reminder, Alemi et al. (2017) upper bounded IB_βib(Z) = (1/βib) I(X;Z) − I(Y;Z) by:\nVIB_βib(θ = {e, b, c}) = (1/N) ∑_{n=1}^{N} {(1/βib) DKL(e(z|x^n)‖b(z)) − E_ε log c(y^n|e(x^n, ε))}. (8)\nIn VIB, all feature distributions e(z|x) are moved towards the same class-agnostic distribution b(z) ∼ N(µ, σ), independently of y. In VCEB, e(z|x) is moved towards the class-conditional marginal b(z|y) ∼ N(bµ(y), bσ(y)). This is the unique difference between VIB and VCEB. VIB leads to a looser approximation with more bias than VCEB."
},
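The unique difference between VIB and VCEB, the target of the KL term, can be made concrete with a toy sketch (all shapes and values below are hypothetical). When every class-conditional b(z|y) collapses to the class-agnostic b(z), the two KL terms coincide; when b(z|y) sits at the class-wise mean of the encoder outputs, the VCEB term can only be smaller:

```python
import numpy as np

def kl_gauss(mu1, s1, mu2, s2):
    # Closed-form KL between two univariate Gaussians.
    return np.log(s2 / s1) + (s1 ** 2 + (mu1 - mu2) ** 2) / (2 * s2 ** 2) - 0.5

def vib_kl(e_mu, e_sig, b_mu, b_sig):
    # VIB: every e(z|x) is pulled toward the same class-agnostic b(z).
    return kl_gauss(e_mu, e_sig, b_mu, b_sig).mean()

def vceb_kl(e_mu, e_sig, b_mu_y, b_sig_y, y):
    # VCEB: e(z|x) is pulled toward the class-conditional b(z|y).
    return kl_gauss(e_mu, e_sig, b_mu_y[y], b_sig_y[y]).mean()

# Toy 1-d encoder outputs for 30 samples over 3 classes.
e_mu = np.linspace(-1.0, 1.0, 30)
e_sig = np.full(30, 0.8)
y = np.arange(30) % 3
class_means = np.array([e_mu[y == k].mean() for k in range(3)])
```

Because the class mean minimizes the within-class squared deviation, the VCEB KL with class means as targets never exceeds the VIB KL with a single global target.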
{
"heading": "E.5 TRANSFORMING IBR AND CEBR INTO TRACTABLE LOSSES",
"text": "In this section we derive the variational approximation of the IBR criterion, defined by:\nIBR_βib,δr(Z1, Z2) = IB_βib(Z1) + IB_βib(Z2) + δr I(Z1;Z2).\nRedundancy Estimation To estimate the redundancy component, we apply the same procedure as for conditional redundancy but without the categorical constraint, as in the seminal work of Belghazi et al. (2018) for mutual information estimation. Let B and Bp be two random batches sampled respectively from the observed joint distribution p(z1, z2) = p(e1(z|x), e2(z|x)) and the product distribution p(z1)p(z2) = p(e1(z|x))p(e2(z|x′)), where x, x′ are two inputs that may not belong to the same class. We similarly train a network w that tries to discriminate these two distributions. With f = w/(1 − w), the redundancy estimate is:\nÎ_RDV = (1/|B|) ∑_{(z1,z2)∈B} log f(z1, z2)︸Diversity − log((1/|Bp|) ∑_{(z1,z′2)∈Bp} f(z1, z′2)),\nand the final loss:\nL̂_RDV(e1, e2) = (1/|B|) ∑_{(z1,z2)∈B} log f(z1, z2).\nIBR Finally we train θ1 = {e1, b1, c1} and θ2 = {e2, b2, c2} jointly by minimizing:\nL_IBR(θ1, θ2) = VIB_βib(θ1) + VIB_βib(θ2) + δr L̂_RDV(e1, e2). (9)\nCEBR For the ablation study, we also consider a criterion that benefits from CEB’s tight approximation but with non-conditional redundancy regularization:\nL_CEBR(θ1, θ2) = VCEB_βceb(θ1) + VCEB_βceb(θ2) + δr L̂_RDV(e1, e2). (10)"
},
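Given discriminator outputs, the redundancy estimate Î_RDV above reduces to simple batch statistics. A minimal sketch (the function name is ours; in practice the w values come from the trained discriminator):

```python
import numpy as np

def redundancy_dv(w_joint, w_prod):
    # Donsker-Varadhan-style estimate from discriminator outputs
    # w in (0, 1); f = w / (1 - w) is the estimated density ratio.
    f_joint = w_joint / (1.0 - w_joint)
    f_prod = w_prod / (1.0 - w_prod)
    return np.mean(np.log(f_joint)) - np.log(np.mean(f_prod))
```

An uninformative discriminator (w = 1/2 everywhere) yields a zero estimate; the better w separates the joint batch from the product batch, the larger the estimate.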
{
"heading": "F FIRST, SECOND AND HIGHERORDER INFORMATION INTERACTIONS",
"text": ""
},
{
"heading": "F.1 DICE REDUCES FIRST AND SECOND ORDER INTERACTIONS",
"text": "Applying information-theoretic principles to deep ensembles leads to tackling interactions among features through conditional mutual information minimization. We define the order of an information interaction as the number of different extracted features involved.\nFirst Order Tackling the first-order interaction I(X;Zi|Y) with VCEB empirically increased overall performance compared to ensembles of deterministic feature extractors learned with categorical cross-entropy, at no cost in inference and almost no additional cost in training. In the Markov chain Zi ← X → Zj, the chain rule provides: I(Zi;Zj|Y) ≤ I(X;Zi|Y). More generally, I(X;Zi|Y) upper bounds higher-order interactions such as the third-order I(Zi;Zj,Zk|Y). In conclusion, VCEB reduces an upper bound of higher-order interactions with quite a simple variational approximation.\nSecond Order In this paper, we directly target the second-order interaction I(Zi;Zj|Y) through a more complex adversarial training. We increase diversity and performance by removing spurious correlations shared by Zi and Zj that would otherwise cause simultaneous errors.\nHigher Order interactions include the third-order I(Zi;Zj,Zk|Y), the fourth-order I(Zi;Zj,Zk,Zl|Y), etc., up to the M-th order. They capture more complex correlations among features. For example, Zj alone (and Zk alone) could be unable to predict Zi, while [Zj, Zk] together could. However, we only consider first- and second-order interactions in the current submission. This is common practice, for example in the feature selection literature (Battiti, 1994; Fleuret, 2004; Brown, 2009; Peng et al., 2005). The main reason to truncate higher-order interactions is computational, as the number of components would grow exponentially and add significant additional cost in training. Another reason is empirical: the additional hyperparameters may be hard to calibrate. These higher-order interactions could nonetheless be approximated through neural estimations like the second order: for example, for the third order, features Zi, Zj and Zk could be given simultaneously to the discriminator w. The complete analysis of these higher-order interactions has great potential and could lead to a future research project."
},
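The chain-rule bound I(Zi;Zj|Y) ≤ I(X;Zi|Y) invoked above can be verified on a small discrete toy model (all distributions below are synthetic illustrations), where Zi and Zj are conditionally independent noisy views of X and Y is a deterministic coarse label of X:

```python
import numpy as np

rng = np.random.default_rng(0)
nx, nz = 4, 4
p_x = np.full(nx, 1.0 / nx)
# Noisy channels Zi|X and Zj|X (row-stochastic matrices).
p_zi_x = rng.dirichlet(np.full(nz, 0.3), size=nx)
p_zj_x = rng.dirichlet(np.full(nz, 0.3), size=nx)
y_of_x = np.array([0, 0, 1, 1])      # Y is a coarse label of X

# Joint p(y, x, zi, zj) under the Markov chain Zi <- X -> Zj.
p = np.zeros((2, nx, nz, nz))
for x in range(nx):
    p[y_of_x[x], x] = p_x[x] * np.outer(p_zi_x[x], p_zj_x[x])

def cond_mi(p_y_ab):
    # I(A;B|Y) from a tensor p(y, a, b).
    total = 0.0
    for py_ab in p_y_ab:
        py = py_ab.sum()
        if py == 0.0:
            continue
        pab = py_ab / py
        pa, pb = pab.sum(axis=1), pab.sum(axis=0)
        mask = pab > 0
        total += py * (pab[mask] * np.log(pab[mask] / np.outer(pa, pb)[mask])).sum()
    return total

i_zi_zj_given_y = cond_mi(p.sum(axis=1))   # marginalize X  -> p(y, zi, zj)
i_x_zi_given_y = cond_mi(p.sum(axis=3))    # marginalize Zj -> p(y, x, zi)
```

The second-order interaction never exceeds the first-order one, which is why reducing I(X;Zi|Y) also bounds the interactions between members.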
{
"heading": "F.2 LEARNING FEATURES INDEPENDENCE WITHOUT COMPRESSION",
"text": "The question is whether we could learn deterministic encoders with second-order I(Zi;Zj|Y) regularization without tackling the first-order I(X;Zi|Y). We summarize several approaches in Table 11.\nFirst Approach Without Sampling Deterministic encoders predict deterministic points in the feature space. Feeding the discriminator w with deterministic triplets without sampling increases diversity and reaches 77.09, compared to 76.78 for independent deterministic encoders. Compared to DICE, w’s task has been simplified: indeed, w tries to separate the joint and the product deterministic distributions, which may no longer overlap. This violates convergence conditions and destabilizes the overall adversarial training and the equilibrium between the encoders and the discriminator.\nSampling and Reparameterization Trick To make the joint and product distributions overlap over the feature space, we apply the reparameterization trick on features with variance 1. This second approach is similar to instance noise (Sønderby et al., 2016), which tackled the instability of adversarial training. We reach 77.33 by protecting individual accuracies.\nSynergy between CEB and CR In comparison, we obtain 77.51 with DICE. In addition to the theoretical motivations, VCEB and CR work empirically in synergy. First, the adversarial learning is simplified and only focuses on spurious correlations that VCEB has not already deleted. This may explain the improved stability with respect to the value of δcr and the reduction of the standard deviations in performance. Second, VCEB learns a Gaussian distribution: a mean but also an input-dependent covariance eσ_i(x). This covariance fits the uncertainty of a given sample: in a similar context, Yu et al. (2019) showed that large covariances were assigned to difficult samples. Sampling from this input-dependent covariance performs better than using an arbitrary fixed variance shared by all dimensions of all extracted features from all samples, from 77.29 to 77.51.\nConclusion DICE benefits from both components: learning redundancy along with VCEB improves results, at almost no extra cost. We think CR can definitely be applied with deterministic encoders as long as the inputs of the discriminator are sampled from overlapping distributions in the feature space. Future work could study new methods to select the variance in sampling. As compression losses yield additional hyperparameters and may underperform for some architectures/datasets, learning only the conditional redundancy (without compression) could increase the applicability of our contributions.\nG IMPACT OF THE SECOND TERM IN THE NEURAL ESTIMATION OF CONDITIONAL REDUNDANCY"
},
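The variance-annealed sampling described above (variance 1 linearly ramped toward the predicted eσ_i(x)) can be sketched as follows; the rampup bounds of 100 and 250 follow the schedule in Appendix B, and the helper names are ours:

```python
import numpy as np

def rampup(start_value, end_value, step, start_step=100, end_step=250):
    # Linear ramp from start_value to end_value between two steps.
    if step <= start_step:
        return start_value
    if step >= end_step:
        return end_value
    frac = (step - start_step) / (end_step - start_step)
    return start_value + frac * (end_value - start_value)

def sample_feature(mu, sigma_pred, step, rng):
    # Reparameterization trick with annealed variance:
    # z = mu + sigma_s * eps, eps ~ N(0, 1).
    sigma_s = rampup(1.0, sigma_pred, step)
    return mu + sigma_s * rng.normal(size=np.shape(mu))
```

Early in training the unit variance makes the joint and product distributions overlap (as with instance noise); later, sampling follows the learned input-dependent covariance.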
{
"heading": "G.1 CONDITIONAL REDUNDANCY IN TWO COMPONENTS",
"text": "The conditional redundancy can be estimated as the difference between two components:\nÎ_CRDV = (1/|B|) ∑_{(z1,z2,y)∈B} log f(z1, z2, y)︸Diversity − log((1/|Bp|) ∑_{(z1,z′2,y)∈Bp} f(z1, z′2, y))︸Fake correlations, (11)\nwith f = w/(1 − w). In this paper, we focused only on the left-hand side (LHS) component of equation 11, which leads to L̂_CRDV in equation 4. We showed empirically that it improves ensemble diversity and overall performance. The LHS forces features extracted from the same input to be unpredictable from each other, i.e., to simulate that they have been extracted from two different images.\nNow we investigate the impact of the right-hand side (RHS) component of equation 11. We conjecture that the RHS forces features extracted from two different inputs of the same class to create fake correlations, to simulate that they have been extracted from the same image. Overall, the RHS would correlate members and decrease diversity in our ensemble."
},
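The LHS/RHS split of equation 11 can be sketched directly on discriminator outputs (the function name is ours); dropping the RHS, as in DICE, keeps only the diversity term:

```python
import numpy as np

def cr_estimate(w_joint, w_prod, keep_rhs=True):
    # Conditional-redundancy estimate from discriminator outputs w in (0, 1).
    # LHS: diversity term kept in DICE.  RHS: "fake correlations" term.
    f = lambda w: w / (1.0 - w)
    lhs = np.mean(np.log(f(w_joint)))
    rhs = np.log(np.mean(f(w_prod))) if keep_rhs else 0.0
    return lhs - rhs
```

With an uninformative discriminator both variants are zero; an informative one makes the full estimate larger than the LHS alone, since the RHS term is then negative.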
{
"heading": "G.2 EXPERIMENTS",
"text": "These intuitions are confirmed by experiments with a 4-branch ResNet32 on CIFAR100, illustrated in Figure 12. Training only with the RHS and removing the LHS (the opposite of what is done in DICE) reduces diversity compared to CEB. Moreover, keeping both the LHS and the RHS leads to slightly reduced diversity and ensemble accuracy compared to DICE: we obtained 77.40 ± 0.19 with LHS+RHS instead of 77.51 ± 0.17 with only the LHS. In conclusion, dropping the RHS performs better while reducing the training cost."
},
{
"heading": "H SOCIOLOGICAL ANALOGY",
"text": "We showed that increasing diversity in features while encouraging the different learners to agree improves performance for neural networks: the optimal diversity-accuracy trade-off was obtained with a large diversity. To finish, we make a short analogy with the importance of diversity in our society. Group decision-making is better than individual decision-making as long as the members do not all belong to the same cluster. Homogenization of the decision makers increases vulnerability to failures, whereas diversity of backgrounds sparks new discoveries (Muldoon, 2016): ideas should be shared and debated among members reflecting the diversity of the society’s various components. Academia especially needs this diversity to promote trust in research (Sierra-Mercado & Lázaro-Muñoz, 2018), to improve the quality of findings (Swartz et al., 2019), the productivity of teams (Vasilescu et al., 2015) and even schooling’s impact (Bowman, 2013)."
},
{
"heading": "I LEARNING STRATEGY OVERVIEW",
"text": "We provide in Figure 13 a zoomed version of our learning strategy."
},
{
"heading": "J MAIN TABLE",
"text": "Table 12 unifies our main results on CIFAR100 from Table 1 and CIFAR10 from Table 2.\nFigure 13: Learning strategy overview. Blue arrows represent training criteria: (1) classification with conditional entropy bottleneck applied separately on members 1 and 2, and (2) adversarial training to delete spurious correlations between members and increase diversity. X and X′ belong to the same Y for conditional redundancy minimization.\nTable 12: Ensemble classification accuracy (Top-1, %) on CIFAR100 and CIFAR10, for ResNet32, ResNet110 and WRN28-2 backbones with 3 to 5 branches or full networks. Rows compare Ind., ONE (Lan et al., 2018), OKDDip (Chen et al., 2020b), ADP (Pang et al., 2019), IB (equation 8), CEB (equation 2), IBR (equation 9), CEBR (equation 10) and DICE (equation 6). DICE obtains the best ensemble accuracy in every configuration, e.g. 77.51±0.17 with a 4-branch ResNet32 on CIFAR100 versus 76.78±0.19 for Ind."
}
]
 2021
 
SP:5561773ab024b083be4e362db079e371abf79653
 [
"The paper proposes a new training framework, namely GSL, for novel content synthesis. GSL enables learning disentangled representations of tangible attributes and achieves novel image synthesis by recombining swappable components in a zero-shot setting. The framework leverages the underlying semantic links across samples, which can be instantiated as a multigraph. A cycle-consistent reconstruction loss, as well as a reconstruction loss, is computed on synthetic samples obtained from swapped latent representations."
]
Visual cognition of primates is superior to that of artificial neural networks in its ability to “envision” a visual object, even a newly-introduced one, in different attributes including pose, position, color, texture, etc. To aid neural networks to envision objects with different attributes, we propose a family of objective functions, expressed on groups of examples, as a novel learning framework that we term Group-Supervised Learning (GSL). GSL allows us to decompose inputs into a disentangled representation with swappable components that can be recombined to synthesize new samples. For instance, images of red boats & blue cars can be decomposed and recombined to synthesize novel images of red cars. We propose an implementation based on an autoencoder, termed group-supervised zero-shot synthesis network (GZS-Net), trained with our learning framework, that can produce a high-quality red car even if no such example is witnessed during training. We test our model and learning framework on existing benchmarks, in addition to a new dataset that we open-source. We qualitatively and quantitatively demonstrate that GZS-Net trained with GSL outperforms state-of-the-art methods.
 [
{
"affiliations": [],
"name": "Yunhao Ge"
},
{
"affiliations": [],
"name": "Sami Abu-El-Haija"
},
{
"affiliations": [],
"name": "Gan Xin"
},
{
"affiliations": [],
"name": "Laurent Itti"
}
]
 [
{
"authors": [
"Yuval Atzmon",
"Gal Chechik"
],
"title": "Probabilistic and-or attribute grouping for zero-shot learning",
"venue": "In Uncertainty in Artificial Intelligence,",
"year": 2018
},
{
"authors": [
"A. Borji",
"S. Izadi",
"L. Itti"
],
"title": "iLab-20M: A large-scale controlled object dataset to investigate deep learning",
"venue": "In IEEE Conference on Computer Vision and Pattern Recognition",
"year": 2016
},
{
"authors": [
"Christopher P Burgess",
"Irina Higgins",
"Arka Pal",
"Loic Matthey",
"Nick Watters",
"Guillaume Desjardins",
"Alexander Lerchner"
],
"title": "Understanding disentangling in β-VAE",
"venue": "arXiv preprint arXiv:1804.03599,",
"year": 2018
},
{
"authors": [
"Ricky T.Q. Chen",
"Xuechen Li",
"Roger B Grosse",
"David K Duvenaud"
],
"title": "Isolating sources of disentanglement in variational autoencoders",
"venue": "In Advances in Neural Information Processing Systems,",
"year": 2018
},
{
"authors": [
"Yunjey Choi",
"Minje Choi",
"Munyoung Kim",
"JungWoo Ha",
"Sunghun Kim",
"Jaegul Choo"
],
"title": "StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation",
"venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,",
"year": 2018
},
{
"authors": [
"Yunhao Ge",
"Jiaping Zhao",
"Laurent Itti"
],
"title": "Pose augmentation: Class-agnostic object pose transformation for object recognition",
"venue": "In European Conference on Computer Vision,",
"year": 2020
},
{
"authors": [
"Spyros Gidaris",
"Praveer Singh",
"Nikos Komodakis"
],
"title": "Unsupervised representation learning by predicting image rotations",
"venue": "In International Conference on Learning Representations,",
"year": 2018
},
{
"authors": [
"Justin Gilmer",
"Samuel S. Schoenholz",
"Patrick F. Riley",
"Oriol Vinyals",
"George E. Dahl"
],
"title": "Neural message passing for quantum chemistry",
"venue": null,
"year": 2017
},
{
"authors": [
"Ian Goodfellow",
"Jean PougetAbadie",
"Mehdi Mirza",
"Bing Xu",
"David WardeFarley",
"Sherjil Ozair",
"Aaron Courville",
"Yoshua Bengio"
],
"title": "Generative adversarial networks",
"venue": "In Neural Information Processing Systems,",
"year": 2014
},
{
"authors": [
"I. Higgins",
"L. Matthey",
"A. Pal",
"C. Burgess",
"X. Glorot",
"M. Botvinick",
"S. Mohamed",
"A. Lerchner"
],
"title": "β-VAE: Learning basic visual concepts with a constrained variational framework",
"venue": "In International Conference on Learning Representations,",
"year": 2017
},
{
"authors": [
"Seunghoon Hong",
"Dingdong Yang",
"Jongwook Choi",
"Honglak Lee"
],
"title": "Inferring semantic layout for hierarchical text-to-image synthesis",
"venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,",
"year": 2018
},
{
"authors": [
"Hyunjik Kim",
"Andriy Mnih"
],
"title": "Disentangling by factorising",
"venue": "arXiv preprint arXiv:1802.05983,",
"year": 2018
},
{
"authors": [
"Diederik P. Kingma",
"Max Welling"
],
"title": "Auto-encoding variational Bayes",
"venue": "In International Conference on Learning Representations,",
"year": 2014
},
{
"authors": [
"Kevin Lai",
"Liefeng Bo",
"Xiaofeng Ren",
"Dieter Fox"
],
"title": "A large-scale hierarchical multi-view RGB-D object dataset",
"venue": "In 2011 IEEE international conference on robotics and automation,",
"year": 2011
},
{
"authors": [
"C.H. Lampert"
],
"title": "Learning to detect unseen object classes by between-class attribute transfer",
"venue": "In IEEE Conference on Computer Vision and Pattern Recognition,",
"year": 2009
},
{
"authors": [
"Oliver Langner",
"Ron Dotsch",
"Gijsbert Bijlstra",
"Daniel HJ Wigboldus",
"Skyler T Hawk",
"AD Van Knippenberg"
],
"title": "Presentation and validation of the radboud faces database",
"venue": "Cognition and emotion,",
"year": 2010
},
{
"authors": [
"Nikos K. Logothetis",
"Jon Pauls",
"Tomaso Poggiot"
],
"title": "Shape representation in the inferior temporal cortex of monkeys",
"venue": "In Current Biology,",
"year": 1995
},
{
"authors": [
"Loic Matthey",
"Irina Higgins",
"Demis Hassabis",
"Alexander Lerchner"
],
"title": "dsprites: Disentanglement testing sprites dataset",
"venue": null,
"year": 2017
},
{
"authors": [
"Mehdi Mirza",
"Simon Osindero"
],
"title": "Conditional generative adversarial nets",
"venue": "arXiv preprint arXiv:1411.1784,",
"year": 2014
},
{
"authors": [
"Franco Scarselli",
"Marco Gori",
"Ah Chung Tsoi",
"Markus Hagenbuchner",
"Gabriele Monfardini"
],
"title": "The graph neural network model",
"venue": "IEEE Transactions on Neural Networks,",
"year": 2009
},
{
"authors": [
"Luan Tran",
"Xi Yin",
"Xiaoming Liu"
],
"title": "Disentangled representation learning GAN for pose-invariant face recognition",
"venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,",
"year": 2017
},
{
"authors": [
"Taihong Xiao",
"Jiapeng Hong",
"Jinwen Ma"
],
"title": "ELEGANT: Exchanging latent encodings with GAN for transferring multiple face attributes",
"venue": "In Proceedings of the European Conference on Computer Vision (ECCV),",
"year": 2018
},
{
"authors": [
"Zhuoqian Yang",
"Wentao Zhu",
"Wayne Wu",
"Chen Qian",
"Qiang Zhou",
"Bolei Zhou",
"Chen Change Loy"
],
"title": "TransMoMo: Invariance-driven unsupervised video motion retargeting",
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,",
"year": 2020
},
{
"authors": [
"Han Zhang",
"Tao Xu",
"Hongsheng Li",
"Shaoting Zhang",
"Xiaogang Wang",
"Xiaolei Huang",
"Dimitris N Metaxas"
],
"title": "StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks",
"venue": "In Proceedings of the IEEE international conference on computer vision,",
"year": 2017
},
{
"authors": [
"JunYan Zhu",
"Taesung Park",
"Phillip Isola",
"Alexei A Efros"
],
"title": "Unpaired image-to-image translation using cycle-consistent adversarial networks",
"venue": "In International Conference on Computer Vision",
"year": 2017
}
]
 [
{
"heading": "1 INTRODUCTION",
"text": "Primates perform well at generalization tasks. If presented with a single visual instance of an object, they can often immediately generalize and envision the object with different attributes, e.g., in a different 3D pose (Logothetis et al., 1995). Primates can readily do so, as their previous knowledge allows them to be cognizant of attributes. Machines, by contrast, are most commonly trained on sample features (e.g., pixels), not taking into consideration the attributes that gave rise to those features.\nTo aid machine cognition of visual object attributes, a class of algorithms focuses on learning disentangled representations (Kingma & Welling, 2014; Higgins et al., 2017; Burgess et al., 2018; Kim & Mnih, 2018; Chen et al., 2018), which map visual samples onto a latent space that separates the information belonging to different attributes. These methods show disentanglement by interpolating between attribute values (e.g., interpolating pose). However, they usually process one sample at a time, rather than contrasting or reasoning about a group of samples. We posit that semantic links across samples could lead to better learning.\nWe are motivated by the visual generalization of primates. We seek a method that can synthesize realistic images for arbitrary queries (e.g., a particular car, in a given pose, on a given background), which we refer to as controlled synthesis. We design a method that enforces semantic consistency of attributes, facilitating controlled synthesis by leveraging semantic links between samples. Our method maps samples onto a disentangled latent representation space that (i) consists of subspaces, each encoding one attribute (e.g., identity, pose, ...), and (ii) is such that two visual samples that share an attribute value (e.g., both have identity “car”) have identical latent values in the shared attribute subspace (identity), even if other attribute values (e.g., pose) differ. 
To achieve this, we propose a general learning framework: Group-Supervised Learning (GSL, Sec. 3), which provides a learner (e.g., a neural network) with groups of semantically-related training examples, represented as a multigraph. Given a query of attributes, GSL proposes groups of training examples with attribute combinations that are useful for synthesizing a test example satisfying the query (Fig. 1). This endows the network with an envisioning capability. In addition to applications in graphics, controlled synthesis can also augment training sets for better generalization on machine learning tasks (Sec. 6.3).\nAs an instantiation of GSL, we propose an encoder-decoder network for zero-shot synthesis: the Group-Supervised Zero-Shot Synthesis Network (GZS-Net, Sec. 4). While learning (Sec. 4.2 & 4.3), we repeatedly draw a group of semantically-related examples, as informed by a multigraph created by GSL. GZS-Net encodes the group examples to obtain latent vectors, swaps entries for one or more attributes in the latent space across examples, following multigraph edges, then decodes into an example within the group (Sec. 4.2).\nOur contributions are: (i) We propose Group-Supervised Learning (GSL), explain how it casts its admissible datasets into a multigraph, and show how it can be used to express learning from semantically-related groups and to synthesize samples with controllable attributes; (ii) We show one instantiation of GSL: the Group-Supervised Zero-Shot Synthesis Network (GZS-Net), trained on groups of examples with reconstruction objectives; (iii) We demonstrate that GZS-Net trained with GSL outperforms state-of-the-art alternatives for controllable image synthesis on existing datasets; (iv) We provide a new dataset, Fonts, with its generating code. It contains 1.56 million images and their attributes. Its simplicity allows rapid idea prototyping for learning disentangled representations."
},
{
"heading": "2 RELATED WORK",
"text": "We review research areas, that share similarities with our work, to position our contribution.\nSelfSupervised Learning (e.g., Gidaris et al. (2018)) admits a dataset containing features of training samples (e.g., upright images) and maps it onto an auxiliary task (e.g., rotated images): dataset examples are drawn and a random transformation (e.g., rotate 90◦) is applied to each. The task could be to predict the transformation (e.g., =90◦) from the transformed features (e.g., rotated image). Our approach is similar, in that it also creates auxiliary tasks, however, the tasks we create involve semanticallyrelated group of examples, rather than from one example at a time.\nDisentangled Representation Learning are methods that infer latent factors given example visible features, under a generative assumption that each latent factor is responsible for generating one semantic attribute (e.g. color). Following Variational Autoencoders (VAEs, Kingma & Welling, 2014), a class of models (including, Higgins et al., 2017; Chen et al., 2018) achieve disentanglement implicitly, by incorporating into the objective, a distance measure e.g. KLdivergence, encouraging the latent factors to be statisticallyindependent. While these methods can disentangle the factors without knowing them beforehand, unfortunately, they are unable to generate novel combinations not witnessed during training (e.g., generating images of red car, without any in training). On the other hand, our method requires knowing the semantic relationships between samples (e.g., which objects are of same identity and/or color), but can then synthesize novel combinations (e.g., by stitching latent features of “any car” plus “any red object”).\nConditional synthesis methods can synthesize a sample (e.g., image) and some use information external to the synthesized modalities, e.g., natural language sentence Zhang et al. 
(2017); Hong et al. (2018), or a class label Mirza & Osindero (2014); Tran et al. (2017).\n1 http://ilab.usc.edu/datasets/fonts\nOurs differs in that our “external information” takes the form of semantic relationships between samples. There are methods based on GANs Goodfellow et al. (2014) that also utilize semantic relationships, including Motion Retargeting (Yang et al., 2020), which unfortunately requires domain-specific hand-engineering (detecting and tracking human body parts). On the other hand, we design and apply our method on different tasks (including people's faces, vehicles, and fonts; see Fig. 1). Further, we compare against two recent GAN methods, starGAN (Choi et al., 2018) and ELEGANT (Xiao et al., 2018), as they are state-of-the-art GAN methods for amending visual attributes onto images. While they are powerful at carrying out local image transformations (within a small patch, e.g., changing skin tone or hair texture), our method better maintains global information: when rotating the main object, the scene also rotates with it, in a semantically coherent manner. Importantly, our learning framework allows expressing simpler model network architectures, such as feed-forward autoencoders trained with only reconstruction objectives, as opposed to GANs, with potential difficulties such as lack of convergence guarantees.\nZero-shot learning also consumes side-information. For instance, the models of Lampert (2009); Atzmon & Chechik (2018) learn from object attributes, like our method. However, (i) these models are supervised to accurately predict attributes, (ii) they train and infer one example at a time, and (iii) they are concerned with classifying unseen objects. 
We differ in that (i) no learning gradients (supervision signal) are derived from the attributes, (ii) the attributes are instead used to group the examples (based on shared attribute values), and (iii) we are concerned with generation rather than classification: we want to synthesize an object in previously-unseen attribute combinations.\nGraph Neural Networks (GNNs) (Scarselli et al., 2009) are a class of models defined on graph-structured data. This is applicable to our method, as we propose to create a multigraph connecting training samples. In fact, our method can be described as a GNN with message-passing functions (Gilmer et al., 2017) that are aware of the latent-space partitioning per attribute (explained in Sec. 4). Nonetheless, for self-containment, we introduce our method without the GNN framework."
},
{
"heading": "3 GROUPSUPERVISED LEARNING",
"text": ""
},
{
"heading": "3.1 DATASETS ADMISSIBLE BY GSL",
"text": "Formally, a dataset admissible by GSL containing n samples D = {x(i)}ni=1 where each example is accompanied with m attributes Da = {(a(i)1 , a (i) 2 , . . . a (i) m )}ni=1. Each attribute value is a member of a countable set: aj ∈ Aj . For instance, pertaining to visual scenes, A1 can denote foregroundcolors A1 = {red, yellow, . . . }, A2 could denote background colors, A3 could correspond to foreground identity, A4 to (quantized) orientation. Such datasets have appeared in literature, e.g. in Borji et al. (2016); Matthey et al. (2017); Langner et al. (2010); Lai et al. (2011)."
},
{
"heading": "3.2 AUXILIARY TASKS VIA MULTIGRAPHS",
"text": "Given a dataset of n samples and their attributes, we define a multigraph M with node set [1..n]. Two nodes, i, k ∈ [1..n] with i 6= k are connected with edge labels M(i, k) ⊆ [1..m] as:\nM(i, k) = { j ∣∣∣ a(i)j = a(k)j ; j ∈ [1..m]} .\nIn particular, M defines a multigraph, with M(i, k) denoting the number of edges connecting nodes i and k, which is equals the number of their shared attributes. Fig. 2 depicts a (sub)multigraph for the Fonts dataset (Sec. 5.1).\nDefinition 1 COVER(S, i): Given node set S ⊆ [1..Dg] and node i ∈ [1..Dg] we say set S covers node i if every attribute value of i is in at least one member of S. Formally:\nCOVER(S, i)⇐⇒ [1..m] = ⋃ k∈S M(i, k). (1)\nWhen COVER(S, i) holds, there are two mutuallyexclusive cases: either i ∈ S, or i /∈ S, respectively shaded as green and blue in Fig. 2 (b). The first case trivially holds even for small S, e.g. COVER({i}, i) holds for all i. However, we are interested in nontrivial sets where S > 1, as sets with S = 1 would cast our proposed network (Sec. 4) to a standard AutoEncoder. The second case\nis crucial for zeroshot synthesis. Suppose the (image) features of node i (in Fig. 2 (b)) are not given, we can search for S1, under the assumption that if COVER(S1, i) holds, then S1 contains sufficient information to synthesize i’s features as they are not given (i /∈ S1). Until this point, we made no assumptions how the pairs (S, i) are extracted (mined) from the multigraph s.t. COVER(S, i) holds. In the sequel, we train with S = 2 and i ∈ S. We find that this particular specialization of GSL is easy to program, and we leaveout analyzing the impact of mining different kinds of cover sets for future work."
},
{
"heading": "4 GROUPSUPERVISED ZEROSHOT SYNTHESIS NETWORK (GZSNET)",
"text": "We now describe our ingredients towards our goal: synthesize holisticallysemantic novel images."
},
{
"heading": "4.1 AUTOENCODING ALONG RELATIONS IN M",
"text": "Autoencoders (D ◦ E) : X → X are composed of an encoder network E : X → Rd and a decoder network D : Rd → X . Our networks further utilize M emitted by GSL. GZSNet consists of\nan encoder E : X ×M→ Rd ×M ; and a decoder D : Rd ×M→ X . (2)\nM denotes the space of sample pairwiserelationships. GSL realizes such (X,M) ⊂ X ×M, where X contains (a batch of) training samples and M the (sub)graph of their pairwise relations. Rather than passing asis the output ofE intoD, one can modify it using algorithmA by chaining: D◦A◦E. For notation brevity, we fold A into the encoder E, by designing a swap operation, next.\n4.2 DISENTANGLEMENT BY SWAP OPERATION\nWhile training our autoencoder D(E(X,M)), we wish to disentangle the latents output by E, to provide use for using D to decode samples not given to E. D (/ E) outputs (/ inputs) one or more images, onto (/ from) the image space. Both networks can access feature and relationship information.\nAt a high level, GZSNet aims to swap attributes across images by swapping corresponding entries across their latent representations. Before any training, we fix partitioning of the the latent space Z = E(X,M). Let rowvector z(1) = [g(1)1 , g (1) 2 , . . . , g (1) m ] be the concatenation of m row vectors\n{g(1)j ∈ Rdj}mj=1 where d = ∑m j=1 dj and the values of {dj}mj=1 are hyperparameters.\nTo simplify the notation to follow, we define an operation swap : Rd × Rd × [1..m] → Rd × Rd, which accepts two latent vectors (e.g., z(1) and z(2)) and an attribute (e.g., 2) and returns the input vectors except that the latent features corresponding to the attribute are swapped. E.g.,\nswap(z(1), z(2), 2) = swap([g(1)1 , g (1) 2 , g (1) 3 , . . . , g (1) m ], [g (2) 1 , g (2) 2 , g (2) 3 , . . . , g (2) m ], 2)\n= [g (1) 1 , g (2) 2 , g (1) 3 , . . . , g (1) m ], [g (2) 1 , g (1) 2 , g (2) 3 , . . . , g (2) m ]\nOneOverlap Attribute Swap. 
To encourage disentanglement in the latent representation of attributes, we consider group S and example x s.t. COVER(S, x) holds, and for all xo ∈ S, x 6= xo, the\npair (xo, x) share exactly one attribute value (M(xo, x) = 1). Encoding those pairs, swapping the latent representation of the attribute, and decoding should then be a noop if the swap did not affect other attributes (Fig. 3b). Specifically, we would like for a pair of examples, x (red border in Fig. 3b) and xo (blue border) sharing only attribute j (e.g., identity)2, with z = E(x) and zo = E(xo), be s.t.\nD (zs) ≈ x and D ( z(o)s ) ≈ x(o); with zs, z(o)s = swap(z, zo, j). (3)\nIf, for each attribute, sufficient sample pairs share only that attribute, and Eq. 3 holds for all with zero residual loss, then disentanglement is achieved for that attribute (on the training set).\nCycle Attribute Swap. This operates on all example pairs, regardless of whether they share an attribute or not. Given two examples and their corresponding latent vectors, if we swap latent information corresponding to any attribute, we should end up with a sensible decoding. However, we may not have groundtruth supervision samples for swapping all attributes of all pairs. For instance, when swapping the color attribute between pair orange truck and white airplane, we would like to learn from this pair, even without any orange airplanes in the dataset. To train from any pair, we are motivated to follow a recipe similar to CycleGAN (Zhu et al., 2017). As shown in Fig. 
3c, given two examples x and x̄: (i) sample an attribute j ∼ U [1..m]; (ii) encode both examples, z = E(x) and z̄ = E(x̄); (iii) swap features corresponding to attribute j with zs, z̄s = swap(z, z̄, j); (iv) decode, x̂ = D(zs) and ̂̄x = D(z̄s); (v) on a second round (hence, cycle), encode again as ẑ = E(x̂) and̂̄z = E(̂̄x); (vi) another swap, which should reverse the first swap, ẑs, ̂̄zs = swap(ẑ, ̂̄z, j); (vii) finally, one last decoding which should approximately recover the original input pair, such that:\nD (ẑs) ≈ x and D (̂̄zs) ≈ x̄; (4)\nIf, after the two encodeswapdecode, we are able to recover the input images, regardless of which attribute we sample, this implies that swapping one attribute does not destroy latent information for other attributes. As shown in Sec. 5, this can be seen as a data augmentation, growing the effective training set size by adding all possible new attribute combinations not already in the training set.\n2It holds that COVER({x, xo}, x) and COVER({x, xo}, xo)\nAlgorithm 1: Training Regime; for sampling data and calculating loss terms Input: Dataset D and Multigraph M Output: Lr, Lsr, Lcsr\n1 Sample x ∈ D, S ⊂ D such that COVER(S, x) and S = m and ∀k ∈ S, M(x, k) = 1 2 for x(o) ∈ S do 3 z ← E(x); z(o) ← E(x(o)); ( zs, z (o) s ) ← swap(z, z(o), j)\n4 Lsr ← Lsr + D (zs)− xl1 + ∣∣∣∣∣∣D (z(o)s )− x(o)∣∣∣∣∣∣\nl1 # Swap reconstruction loss\n5 x̄ ∼ D and j ∼ U [1..m] # Sample for Cycle swap 6 z ← E(x); z̄ ← E(x̄); (zs, z̄s)← swap(z, z̄, j); x̂← D(zs); ̂̄x← D(z̄s) 7 ẑ ← E(x̂); ̂̄z ← E(̂̄x); (ẑs, ̂̄zs)← swap(ẑ, ̂̄z, j) 8 Lcsr ← D (ẑs)− xl1 +\n∣∣∣∣D (̂̄zs)− x̄∣∣∣∣l1 # Cycle reconstruction loss 9 Lr ← D (E(x))− xl1 # Standard reconstruction loss"
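The swap operation on a partitioned latent vector can be sketched with NumPy; the chunk sizes below are toy values standing in for the hyperparameters {d_j}, not the paper's actual partition:

```python
import numpy as np

# Latent layout: m attribute chunks of sizes d_1..d_m (hyperparameters).
SIZES = [2, 2, 2]                    # toy d_j values for m = 3
OFFSETS = np.cumsum([0] + SIZES)     # chunk j spans OFFSETS[j]:OFFSETS[j+1]

def swap(z1, z2, j):
    """Exchange the latent chunk of attribute j between two latent vectors."""
    a, b = OFFSETS[j], OFFSETS[j + 1]
    z1s, z2s = z1.copy(), z2.copy()
    z1s[a:b], z2s[a:b] = z2[a:b], z1[a:b]
    return z1s, z2s

z1 = np.array([1., 1., 2., 2., 3., 3.])
z2 = np.array([7., 7., 8., 8., 9., 9.])
z1s, z2s = swap(z1, z2, 1)    # attribute-1 chunks exchanged
# Swapping the same attribute again reverses the first swap (here shown
# directly on latents, without the re-encoding step the cycle loss uses):
z1c, z2c = swap(z1s, z2s, 1)  # recovers z1, z2 exactly
```

The involution property of `swap` is what makes the cycle in Eq. 4 recover the inputs when encoding and decoding are lossless.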
},
{
"heading": "4.3 TRAINING AND OPTIMIZATION",
"text": "Algorithm 1 lists our sampling strategy and calculates loss terms, which we combine into a total loss\nL(E,D;D,M) = Lr + λsrLsr + λcsrLcsr, (5) where Lr, Lsr and Lcsr, respectively are the reconstruction, swapreconstruction, and cycle construction losses. Scalar coefficients λsr, λcsr > 0 control the relative importance of the loss terms. The total loss L can be minimized w.r.t. parameters of encoder (E) and decoder (D) via gradient descent."
},
{
"heading": "5 QUALITATIVE EXPERIMENTS",
"text": "We qualitatively evaluate our method on zeroshot synthesis tasks, and on its ability to learn disentangled representations, on existing datasets (Sec. 5.2), and on a dataset we contribute (Sec. 5.1).\nGZSNet architecture. For all experiments, the encoder E is composed of two convolutional layers with stride 2, followed by 3 residual blocks, followed by a convolutional layer with stride 2, followed by reshaping the response map to a vector, and finally two fullyconnected layers to output 100dim vector as latent feature. The decoder D mirrors the encoder, and is composed of two fullyconnected layers, followed by reshape into cuboid, followed by deconv layer with stride 2, followed by 3 residual blocks, then finally two deconv layers with stride 2, to output a synthesized image."
},
{
"heading": "5.1 FONTS DATASET & ZEROSHOT SYNTHESIS PERFORMANCE",
"text": "Design Choices. Fonts is a computergenerated image datasets. Each image is of an alphabet letter and is accompanied with its generating attributes: Letters (52 choices, of lower and uppercase English alphabet); size (3 choices); font colors (10); background colors (10); fonts (100); giving a total of 1.56 million images, each with size (128× 128) pixels. We propose this dataset to allow fast testing and idea iteration on zeroshot synthesis and disentangled representation learning. Samples from the dataset are shown in Fig. 2. Details and source code are in the Appendix. We partition the 100d latents equally among the 5 attributes. We use a train:test split of 75:25. Baselines. We train four baselines: • The first three are a standard Autoencoder, a βVAE (Higgins et al., 2017), and βTCVAE (Chen\net al., 2018). βVAE and βTCVAE show reasonable disentanglement on the dSprites dataset (Matthey et al., 2017). Yet, they do not make explicit the assignment between latent variables and attributes, which would have been useful for precisely controlling the attributes (e.g. color, orientation) of synthesized images. Therefore, for these methods, we designed a besteffort approach by exhaustively searching for the assignments. Once assignments are known, swapping attributes between images might become possible with these VAEs, and hopefully enabling for controllablesynthesis. We denote these three baselines with this Exhaustive Search, using suffix +ES. Details on Exhaustive Search are in the Appendix.\n• The fourth baseline, AE+DS, is an autoencoder where its latent space is partitioned and each partition receives direct supervision from one attribute. Further details are in the Appendix.\nAs shown in Fig. 4, our method outperforms baselines, with secondrunner being AE+DS: With discriminative supervision, the model focus on the most discriminative information, e.g., can distinguish e.g. across size, identity, etc, but can hardly synthesize photorealistic letters."
},
{
"heading": "5.2 ZEROSHOT SYNTHESIS ON ILAB20M AND RAFD",
"text": "iLab20M (Borji et al., 2016): is an attributed dataset containing images of toy vehicles placed on a turntable using 11 cameras at different viewing points. There are 3 attribute classes: vehicle identity: 15 categories, each having 25160 instances; pose; and backgrounds: over 14 for each identity: projecting vehicles in relevant contexts. Further details are in the Appendix. We partition the 100d latent space among attributes as: 60 for identity, 20 for pose, and 20 for background. iLab20M has limited attribute combinations (identity shows only in relevant background; e.g., cars on roads but not in deserts), GZSNet can disentangle these three attributes and reconstruct novel combinations (e.g., cars on desert backgrounds) Fig. 5 shows qualitative generation results.\nWe compare against (AE+DS), confirming that maintains discriminative information, and against two stateoftheart GAN baselines: starGAN (Choi et al., 2018) and ELEGANT (Xiao et al., 2018). GAN baselines are strong in knowing what to change but not necessarily how to change it: Where change is required, pixels are locally perturbed (within a patch) but the perturbations often lack global correctness (on the image). See Appendix for further details and experiments on these GAN methods.\nRaFD (Radboud Faces Database, Langner et al., 2010): contains pictures of 67 models displaying 8 emotional expressions taken by 5 different camera angles simultaneously. There are 3 attributes: identity, camera position (pose), and expression. We partition the 100d latent space among the attributes as 60 for identity, 20 for pose, and 20 for expression. We use a 80:20 split for train:test, and use GZSNet to synthesize images with novel combination of attributes (Fig. 6). The synthesized images can capture the corresponding attributes well, especially for pose and identity."
},
{
"heading": "6 QUANTITATIVE EXPERIMENTS",
"text": ""
},
{
"heading": "6.1 QUANTIFYING DISENTANGLEMENT THROUGH ATTRIBUTE COPREDICTION",
"text": "Can latent features of one attribute predict the attribute value? Can it also predict values for other attributes? Under perfect disentanglement, we should answer always for the first and never for the second. Our network did not receive attribute information through supervision, but rather, through swapping. We quantitatively assess disentanglement by calculating a modelbased confusion matrix between attributes: We analyze models trained on the Fonts dataset. We take the Test examples from Font, and split them 80:20 for trainDR:testDR. For each attribute pair j, r ∈ [1..m]× [1..m], we train a classifier (3 layer MLP) from gj of trainDR to the attribute values of r, then obtain the accuracy of each attribute by testing with gj of testDR. Table 1 compares how well features of each attribute (row) can predict an attribute value (column): perfect should be as close as possible to Identity matrix, with offdiagonal entries close to random (i.e., 1 / Ar). GZSNet outperforms other methods, except for (AE + DS) as its latent space was Directly Supervised for this particular task, though it shows inferior synthesis performance."
},
{
"heading": "6.2 DISTANCE OF SYNTHESIZED IMAGE TO GROUND TRUTH",
"text": "The construction of the Fonts dataset allows programmatic calculating groundtruth images corresponding to synthesized images (recall, Fig. 4). We measure how well do our generated images compare to the groundtruth test images. Table 2 shows image similarity metrics, averaged over the test set, comparing our method against baselines. Our method significantly outperforms baselines."
},
{
"heading": "6.3 GZSNET BOOST OBJECT RECOGNITION",
"text": "We showcase that our zeroshot synthesised images by GZSNet can augment and boost training of a visual object recognition classifier (Ge et al., 2020). Two different training datasets (Fig. 7a) are tailored from iLab20M, pose and background unbalanced datasets (DUB) (half classes with 6 poses per object instance, other half with only 2 poses; as we cut poses, some backgrounds are also\neliminated), as well as pose and background balanced dataset (DB) (all classes with all 6 poses per object instance).\nWe use GZSNet to synthesize the missing images ofDUB and synthesize a new (augmented) balanced dataset DBs. We alternatively use common data augmentation methods (random crop, horizontal flip, scale resize, etc) to augment theDUB dataset to the same number of images asDBs, calledDUBa. We show object recognition performance on the test set using these four datasets respectively. Comparing DBs withDUB shows≈ 7% points improvements on classification performance, due to augmentation with synthesized images for missing poses in the training set, reaching the level of when all real poses are available (DB). Our synthesized poses outperform traditional data augmentation (DUBa)"
},
{
"heading": "7 CONCLUSION",
"text": "We propose a new learning framework, Group Supervised Learning (GSL), which admits datasets of examples and their semantic relationships. It provides a learner groups of semanticallyrelated samples, which we show is powerful for zeroshot synthesis. In particular, our Groupsupervised ZeroShot synthesis network (GZSNet) is capable of training on groups of examples, and can learn disentangled representations by explicitly swapping latent features across training examples, along edges suggested by GSL. We show that, to synthesize samples given a query with custom attributes, it is sufficient to find one example per requested attribute and to combine them in the latent space. We hope that researchers find our learning framework useful and extend it for their applications."
},
{
"heading": "ACKNOWLEDGMENTS",
"text": "This work was supported by CBRIC (one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program sponsored by DARPA), the Army Research Office (W911NF2020053), and the Intel and CISCO Corporations. The authors affirm that the views expressed herein are solely their own, and do not represent the views of the United States government or any agency thereof."
},
{
"heading": "A FONTS DATASET",
"text": "Fonts is a computergenerated RGB image datasets. Each image, with 128× 128 pixels, contains an alphabet letter rendered using 5 independent generating attributes: letter identity, size, font color, background color and font. Fig.1 shows some samples: in each row, we keep all attributes values the same but vary one attribute value. Attribute details are shown in Table 1. The dataset contains all\npossible combinations of these attributes, totaling to 1560000 images. Generating attributes for all images are contained within the dataset. Our primary motive for creating the Fonts dataset, is that it allows fast testing and idea iteration, on disentangled representation learning and zeroshot synthesis.\nYou can download the dataset and its generating code from: http://ilab.usc.edu/ datasets/fonts , which we plan to keep uptodate with contributions from ourselves and the community."
},
{
"heading": "B BASELINES",
"text": "B.1 EXHAUSTIVE SEARCH (ES) AFTER TRAINING AUTOENCODER BASED METHODS\nAfter training the baselines: standard Autoencoder, a βVAE (Higgins et al., 2017), and TCVAE (Chen et al., 2018). We want to search for the assignment between latent variables and attributes, as these VAEs do not make explicit the assignment. This knowing the assignment should hypothetically allow us to trade attributes between two images by swapping feature values belonging to the attribute we desire to swap.\nTo discover the assignment from latent dimension to attribute, we map all n training images through the encoder, giving a 100D vector per training sample ∈ Rn×100. We make an 80:20 split on the vectors, obtaining XtrainES ∈ R0.8n×100 and XtestES ∈ R0.2n×100. Then, we randomly sample K different partitionings P of the 100D space evenly among the 5 attributes. For each partitioning p ∈ P , we create 5 classification tasks, one task per attribute, according to p:{( XtrainES [:, pj ] ∈ R0.8n×20, XtestES [:, pj ] ∈ R0.2n×20 )}5 j=1\n. For each task j, we train a 3layer MLP to map XtrainES [:, pj ] to their known attribute values and measure its performance on XtestES [:, pj ]. Finally, we commit to the partitioning p ∈ P with highest average performance on the 5 attribute tasks. This p represents our best effort to determine which latent feature dimensions correspond to which attributes. For zeroshot synthesis with baselines, we swap latent dimensions indicated by partitioning p. We denote three baselines with this Exhaustive Search, using suffix +ES (Fig. 4).\nB.2 DIRECT SUPERVISION (DS) ON AUTOENCODER LATENT SPACE\nThe last baseline (AE+DS) directly uses attribute labels to supervise the latent disentangled representation of the autoencoder by adding auxiliary classification modules. Specifically, the encoder maps an image sample x(i) to a 100d latent vector z(i) = E(x(i)), equally divided into 5 partitions corresponding to 5 attributes: z(i) = [g(i)1 , g (i) 2 , . . . , g (i) 5 ]. 
Each attribute partition has an attribute label, [y_1^(i), y_2^(i), . . . , y_5^(i)], which represents the attribute value (e.g. for the font color attribute, the label represents different colors: red, green, blue, etc.). We use 5 auxiliary classification modules to predict the corresponding class label given each latent attribute partition as input. We use the Cross-Entropy loss as the classification loss, and the training goal is to minimize both the reconstruction loss and the classification loss.\nAfter training, we have the assignment between latent variables and attributes, so we can achieve attribute swapping and controlled synthesis (Fig. 4 (AE+DS)). The inferior synthesis performance demonstrates that the supervision (classification task) preserves discriminative information that is insufficient for photorealistic generation, while our GZS-Net uses the one-overlap attribute swap and cycle attribute swap, which enforce the disentangled information to be sufficient for photorealistic synthesis.\nB.3 ELEGANT (XIAO ET AL., 2018)\nWe utilize the authors' open-sourced code: https://github.com/Prinsphield/ELEGANT. For ELEGANT and starGAN (Section B.4), we want to synthesize a target image that has the same identity as the id-provider image, the same background as the background-provider image, and the same pose as the pose-provider image. To achieve this, we want to change the background and pose attributes of the id image.\nAlthough ELEGANT is strong at making image transformations that are local to relatively small neighborhoods, it does not work well for our datasets, where image-wide transformations are required for meaningful synthesis. This can be confirmed by their model design: their final output is a pixel-wise addition of a residual map plus the input image. Further, ELEGANT treats all attribute values as binary: it represents each attribute value in a different part of the latent space, whereas our method devotes part of the latent space to representing all values of an attribute. 
For investigation, we trained dozens of ELEGANT models with different hyperparameters, detailed as follows:\n• For iLab-20M, pose and background contain a total of 117 attribute values (6 for pose, 111 for background). As such, we tried training on all attribute values (dividing the latent space among 117 attribute values). We note that this training regime was too slow and the loss values did not seem to change much during training, even with various learning rate choices (listed below).\n• To reduce the difficulty of the task for ELEGANT, we ran further experiments restricting attribute variation to only 17 attribute values (6 for pose, 11 for background); this shows more qualitative promise than 117 attributes. This is what we report.\n• Fig. 10 shows that ELEGANT finds it more challenging to change the pose than to change the background. We now explain how we generated Columns 3 and 4 of Fig. 10 for modifying the background. We modify the latent features of the identity image before decoding. Since the Identity input image and the Background input image have known but different background values, their background latent features are represented in two different latent spaces. One can swap on one or on both of these latent spaces. Columns 3 and 4 of Fig. 10 swap on only one latent space. However, in Fig. 5 of the main paper, we swap on both positions. We also show swapping only the pose attribute (across 2 latent spaces) in Column 1 of Fig. 10, and swapping both pose and background in Column 2.\n• To investigate whether the model's performance is due to poor convergence of the generator, we qualitatively assess its performance on the training set. Fig. 11 shows the output of ELEGANT on training samples. We see that the reconstruction (right) of the input images (left) shows decent quality, suggesting that the generator network has converged to decently good parameters. 
Nonetheless, we see artefacts in its outputs when amending attributes, particularly at the pixel locations where a change is required. This shows that the model setup of ELEGANT is aware that these pixel values need to be updated, but the actual change is not coherent across the image.\n• For the above, we applied a generous sweep of training hyperparameters, including:\n– Learning rate: the authors' original is 2e-4; we tried several values between 1e-5 and 1e-3, including different rates for the generator and discriminator.\n– Objective term coefficients: there are multiple loss terms for the generator, an adversarial loss and a reconstruction loss. We used a grid search, multiplying the original coefficients by a number from [0.2, 0.5, 2, 5] for each of the loss terms, and tried several combinations.\n– The update frequency of the weights of the generator (G) and discriminator (D). Since D is easier to learn, we perform k update steps on G for every update step on D. We tried k = 5, 10, 15, 20, 30, 40, 50.\nWe report the ELEGANT results showing the best qualitative performance.\nOverall, ELEGANT does not work well for holistic image manipulation (though it works well for local image edits, per the experiments by its authors (Xiao et al., 2018)).\nB.4 STARGAN (CHOI ET AL., 2018)\nWe utilize the authors' open-sourced code: https://github.com/yunjey/stargan. Unlike ELEGANT (Xiao et al., 2018) and our method, starGAN accepts only one input image and edit information; the edit information is not extracted from another image, following their method and published code."
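The random even partitioning step of the Exhaustive Search (Sec. B.1) can be sketched as follows; the dimensions, block count, and seeded RNG are illustrative choices, not taken from the released code:

```python
import random

def random_partition(dim=100, m=5, seed=0):
    """Randomly split latent dims [0..dim) evenly into m attribute blocks."""
    rng = random.Random(seed)
    idx = list(range(dim))
    rng.shuffle(idx)
    step = dim // m
    return [sorted(idx[j * step:(j + 1) * step]) for j in range(m)]

parts = random_partition()
# Each candidate partitioning p would then be scored by training one small
# classifier per attribute on its block and keeping the best-scoring p.
```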
},
{
"heading": "C ZEROSHOT SYNTHESIS PERFORMANCE ON DSPRITES DATASET",
"text": "We qualitatively evaluate our method, GroupSupervised ZeroShot Synthesis Network (GZSNet), against three baseline methods, on zeroshot synthesis tasks on the dSprites dataset.\nC.1 DSPRITES\ndSprites (Matthey et al., 2017) is a dataset of 2D shapes procedurally generated from 6 ground truth independent latent factors. These factors are color, shape, scale, rotation, x and ypositions of a sprite. All possible combinations of these latents are present exactly once, generating 737280 total images. Latent factor values (Color: white; Shape: square, ellipse, heart; Scale: 6 values linearly\nspaced in [0.5, 1]; Orientation: 40 values in [0, 2 pi]; Position X: 32 values in [0, 1]; Position Y: 32 values in [0, 1])\nC.2 EXPERIMENTS OF BASELINES AND GZSNET\nWe train a 10dimensional latent space and partition the it equally among the 5 attributes: 2 for shape, 2 for scale, 2 for orientation, 2 for position X , and 2 for position Y . We use a train:test split of 75:25.\nWe train 3 baselines: a standard Autoencoder, a βVAE (Higgins et al., 2017), and TCVAE (Chen et al., 2018). To recover the latenttoattribute assignment for these baselines, we utilize the Exhaustive Search besteffort strategy, described in the main paper: the only difference is that we change the dimension of Z space from 100 to 10. Once assignments are known, we utilize these baseline VAEs by attribute swapping to do controlled synthesis. We denote these baselines using suffix +ES.\nAs is shown in Figure 2, GZSNet can precisely synthesize zeroshot images with new combinations of attributes, producing images similar to the groud truth. The baselines βVAE and TCVAE produce realistic images of good visual quality, however, not satisfying the requested query: therefore, they cannot do controllable synthesis even when equipped with our besteffort Exhaustive Search to discover the disentanglement. 
Standard autoencoders cannot synthesize meaningful images when combining latents from different examples, producing images outside the distribution of the training samples (e.g., showing multiple sprites per image)."
}
]
 2021
 ZERO-SHOT SYNTHESIS WITH GROUP-SUPERVISED LEARNING

SP:9f70871f0111b58783f731748d8750c635998f32
 [
"This paper presents an approach to learn goal conditioned policies by relying on selfplay which sets the goals and discovers a curriculum of tasks for learning. Alice and Bob are the agents. Alice's task is to set a goal by following a number of steps in the environment and she is rewarded when the goal is too challenging for Bob to solve. Bob's task is to solve the task by trying to reproduce the end state of Alice's demonstration. As a result, the learned policy performs various tasks and can work in zeroshot settings."
]
 We train a single, goal-conditioned policy that can solve many robotic manipulation tasks, including tasks with previously unseen goals and objects. We rely on asymmetric self-play for goal discovery, where two agents, Alice and Bob, play a game. Alice is asked to propose challenging goals and Bob aims to solve them. We show that this method can discover highly diverse and complex goals without any human priors. Bob can be trained with only sparse rewards, because the interaction between Alice and Bob results in a natural curriculum and Bob can learn from Alice’s trajectory when relabeled as a goal-conditioned demonstration. Finally, our method scales, resulting in a single policy that can generalize to many unseen tasks such as setting a table, stacking blocks, and solving simple puzzles. Videos of a learned policy are available at https://roboticsselfplay.github.io.
 []
 [
{
"authors": [
"Marcin Andrychowicz",
"Filip Wolski",
"Alex Ray",
"Jonas Schneider",
"Rachel Fong",
"Peter Welinder",
"Bob McGrew",
"Josh Tobin",
"Pieter Abbeel",
"Wojciech Zaremba"
],
"title": "Hindsight experience replay",
"venue": "In Advances in neural information processing systems,",
"year": 2017
},
{
"authors": [
"Trapit Bansal",
"Jakub Pachocki",
"Szymon Sidor",
"Ilya Sutskever",
"Igor Mordatch"
],
"title": "Emergent complexity via multiagent competition",
"venue": "In International Conference on Learning Representations,",
"year": 2018
},
{
"authors": [
"Adrien Baranes",
"PierreYves Oudeyer"
],
"title": "Active learning of inverse models with intrinsically motivated goal exploration in robots",
"venue": "Robotics and Autonomous Systems,",
"year": 2013
},
{
"authors": [
"Yuri Burda",
"Harrison Edwards",
"Amos Storkey",
"Oleg Klimov"
],
"title": "Exploration by random network distillation",
"venue": "In International Conference on Learning Representations,",
"year": 2019
},
{
"authors": [
"Angel X Chang",
"Thomas Funkhouser",
"Leonidas Guibas",
"Pat Hanrahan",
"Qixing Huang",
"Zimo Li",
"Silvio Savarese",
"Manolis Savva",
"Shuran Song",
"Hao Su"
],
"title": "Shapenet: An informationrich 3d model repository",
"venue": "arXiv preprint arXiv:1512.03012,",
"year": 2015
},
{
"authors": [
"Marc Peter Deisenroth",
"Carl Edward Rasmussen",
"Dieter Fox"
],
"title": "Learning to control a lowcost manipulator using dataefficient reinforcement learning. Robotics: Science and Systems VII, pp",
"venue": "arXiv preprint arXiv:1611.02779,",
"year": 2011
},
{
"authors": [
"Yan Duan",
"Marcin Andrychowicz",
"Bradly Stadie",
"OpenAI Jonathan Ho",
"Jonas Schneider",
"Ilya Sutskever",
"Pieter Abbeel",
"Wojciech Zaremba"
],
"title": "Oneshot imitation learning",
"venue": "In Advances in neural information processing systems,",
"year": 2017
},
{
"authors": [
"Adrien Ecoffet",
"Joost Huizinga",
"Joel Lehman",
"Kenneth O Stanley",
"Jeff Clune"
],
"title": "Goexplore: a new approach for hardexploration problems",
"venue": null,
"year": 1901
},
{
"authors": [
"Adrien Ecoffet",
"Joost Huizinga",
"Joel Lehman",
"Kenneth O Stanley",
"Jeff Clune"
],
"title": "First return then explore",
"venue": "arXiv preprint arXiv:2004.12919,",
"year": 2020
},
{
"authors": [
"Lasse Espeholt",
"Hubert Soyer",
"Remi Munos",
"Karen Simonyan",
"Vlad Mnih",
"Tom Ward",
"Yotam Doron",
"Vlad Firoiu",
"Tim Harley",
"Iain Dunning"
],
"title": "Impala: Scalable distributed deeprl with importance weighted actorlearner architectures",
"venue": "In International Conference on Machine Learning,",
"year": 2018
},
{
"authors": [
"Carlos Florensa",
"David Held",
"Markus Wulfmeier",
"Michael Zhang",
"Pieter Abbeel"
],
"title": "Reverse curriculum generation for reinforcement learning",
"venue": "In Conference on Robot Learning,",
"year": 2017
},
{
"authors": [
"Carlos Florensa",
"David Held",
"Xinyang Geng",
"Pieter Abbeel"
],
"title": "Automatic goal generation for reinforcement learning agents",
"venue": "In International conference on machine learning,",
"year": 2018
},
{
"authors": [
"Shixiang Gu",
"Ethan Holly",
"Timothy Lillicrap",
"Sergey Levine"
],
"title": "Deep reinforcement learning for robotic manipulation with asynchronous offpolicy updates",
"venue": "In IEEE international conference on robotics and automation (ICRA),",
"year": 2017
},
{
"authors": [
"Abhishek Gupta",
"Vikash Kumar",
"Corey Lynch",
"Sergey Levine",
"Karol Hausman"
],
"title": "Relay policy learning: Solving longhorizon tasks via imitation and reinforcement learning",
"venue": "In Conference on Robot Learning,",
"year": 2020
},
{
"authors": [
"Karol Hausman",
"Jost Tobias Springenberg",
"Ziyu Wang",
"Nicolas Heess",
"Martin Riedmiller"
],
"title": "Learning an embedding space for transferable robot skills",
"venue": "In International Conference on Learning Representations,",
"year": 2018
},
{
"authors": [
"Jemin Hwangbo",
"Joonho Lee",
"Alexey Dosovitskiy",
"Dario Bellicoso",
"Vassilios Tsounis",
"Vladlen Koltun",
"Marco Hutter"
],
"title": "Learning agile and dynamic motor skills for legged robots",
"venue": "Science Robotics,",
"year": 2019
},
{
"authors": [
"Stephen James",
"Paul Wohlhart",
"Mrinal Kalakrishnan",
"Dmitry Kalashnikov",
"Alex Irpan",
"Julian Ibarz",
"Sergey Levine",
"Raia Hadsell",
"Konstantinos Bousmalis"
],
"title": "Simtoreal via simtosim: Dataefficient robotic grasping via randomizedtocanonical adaptation networks",
"venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,",
"year": 2019
},
{
"authors": [
"Leslie Pack Kaelbling"
],
"title": "Learning to achieve goals",
"venue": "In Proceedings of the 13th International Joint Conference on Artificial Intelligence,",
"year": 1993
},
{
"authors": [
"Diederik P Kingma",
"Jimmy Ba"
],
"title": "Adam: A method for stochastic optimization",
"venue": "arXiv preprint arXiv:1412.6980,",
"year": 2014
},
{
"authors": [
"Sergey Levine",
"Chelsea Finn",
"Trevor Darrell",
"Pieter Abbeel"
],
"title": "Endtoend training of deep visuomotor policies",
"venue": "The Journal of Machine Learning Research,",
"year": 2016
},
{
"authors": [
"Andrew Levy",
"George Konidaris",
"Robert Platt",
"Kate Saenko"
],
"title": "Learning multilevel hierarchies with hindsight",
"venue": "In International Conference on Learning Representations,",
"year": 2019
},
{
"authors": [
"Richard Li",
"Allan Jabri",
"Trevor Darrell",
"Pulkit Agrawal"
],
"title": "Towards practical multiobject manipulation using relational reinforcement learning",
"venue": "arXiv preprint arXiv:1912.11032,",
"year": 2019
},
{
"authors": [
"Hao Liu",
"Alexander Trott",
"Richard Socher",
"Caiming Xiong"
],
"title": "Competitive experience replay",
"venue": "In International Conference on Learning Representations,",
"year": 2019
},
{
"authors": [
"Corey Lynch",
"Mohi Khansari",
"Ted Xiao",
"Vikash Kumar",
"Jonathan Tompson",
"Sergey Levine",
"Pierre Sermanet"
],
"title": "Learning latent plans from play",
"venue": "In Conference on Robot Learning,",
"year": 2020
},
{
"authors": [
"Tambet Matiisen",
"Avital Oliver",
"Taco Cohen",
"John Schulman"
],
"title": "Teacherstudent curriculum learning",
"venue": "IEEE transactions on neural networks and learning systems,",
"year": 2019
},
{
"authors": [
"Ofir Nachum",
"Shixiang Gu",
"Honglak Lee",
"Sergey Levine"
],
"title": "Dataefficient hierarchical reinforcement learning",
"venue": "In Advances in Neural Information Processing Systems,",
"year": 2018
},
{
"authors": [
"Ashvin Nair",
"Bob McGrew",
"Marcin Andrychowicz",
"Wojciech Zaremba",
"Pieter Abbeel"
],
"title": "Overcoming exploration in reinforcement learning with demonstrations",
"venue": "In IEEE International Conference on Robotics and Automation (ICRA),",
"year": 2018
},
{
"authors": [
"Ashvin V Nair",
"Vitchyr Pong",
"Murtaza Dalal",
"Shikhar Bahl",
"Steven Lin",
"Sergey Levine"
],
"title": "Visual reinforcement learning with imagined goals",
"venue": "In Advances in Neural Information Processing Systems,",
"year": 2018
},
{
"authors": [
"OpenAI",
"Ilge Akkaya",
"Marcin Andrychowicz",
"Maciek Chociej",
"Mateusz Litwin",
"Bob McGrew",
"Arthur Petron",
"Alex Paino",
"Matthias Plappert",
"Glenn Powell",
"Raphael Ribas",
"Jonas Schneider",
"Nikolas Tezak",
"Jerry Tworek",
"Peter Welinder",
"Lilian Weng",
"Qiming Yuan",
"Wojciech Zaremba",
"Lei Zhang"
],
"title": "Solving rubik’s cube with a robot hand",
"venue": "arXiv preprint arXiv:1910.07113,",
"year": 2019
},
{
"authors": [
"OpenAI",
"Marcin Andrychowicz",
"Bowen Baker",
"Maciek Chociej",
"Rafal Jozefowicz",
"Bob McGrew",
"Jakub Pachocki",
"Arthur Petron",
"Matthias Plappert",
"Glenn Powell",
"Alex Ray",
"Jonas Schneider",
"Szymon Sidor",
"Josh Tobin",
"Peter Welinder",
"Lilian Weng",
"Wojciech Zaremba"
],
"title": "Learning dexterous inhand manipulation",
"venue": "The International Journal of Robotics Research,",
"year": 2020
},
{
"authors": [
"PierreYves Oudeyer",
"Frdric Kaplan",
"Verena V Hafner"
],
"title": "Intrinsic motivation systems for autonomous mental development",
"venue": "IEEE transactions on evolutionary computation,",
"year": 2007
},
{
"authors": [
"Deepak Pathak",
"Pulkit Agrawal",
"Alexei A Efros",
"Trevor Darrell"
],
"title": "Curiositydriven exploration by selfsupervised prediction",
"venue": "In Proceedings of the 34th International Conference on Machine LearningVolume",
"year": 2017
},
{
"authors": [
"Ivaylo Popov",
"Nicolas Heess",
"Timothy Lillicrap",
"Roland Hafner",
"Gabriel BarthMaron",
"Matej Vecerik",
"Thomas Lampe",
"Yuval Tassa",
"Tom Erez",
"Martin Riedmiller"
],
"title": "Dataefficient deep reinforcement learning for dexterous manipulation",
"venue": "arXiv preprint arXiv:1704.03073,",
"year": 2017
},
{
"authors": [
"Sebastien Racaniere",
"Andrew Lampinen",
"Adam Santoro",
"David Reichert",
"Vlad Firoiu",
"Timothy Lillicrap"
],
"title": "Automated curriculum generation through settersolver interactions",
"venue": "In International Conference on Learning Representations,",
"year": 2020
},
{
"authors": [
"Martin Riedmiller",
"Roland Hafner",
"Thomas Lampe",
"Michael Neunert",
"Jonas Degrave",
"Tom Van de Wiele",
"Volodymyr Mnih",
"Nicolas Heess",
"Jost Tobias Springenberg"
],
"title": "Learning by playing solving sparse reward tasks from scratch, 2018",
"venue": null,
"year": 2018
},
{
"authors": [
"Andrei A Rusu",
"Matej Večerík",
"Thomas Rothörl",
"Nicolas Heess",
"Razvan Pascanu",
"Raia Hadsell"
],
"title": "Simtoreal robot learning from pixels with progressive nets",
"venue": "In Conference on Robot Learning,",
"year": 2017
},
{
"authors": [
"Fereshteh Sadeghi",
"Sergey Levine"
],
"title": "CAD2RL: real singleimage flight without a single real image",
"venue": "In Robotics: Science and Systems XIII, Massachusetts Institute of Technology,",
"year": 2017
},
{
"authors": [
"Fereshteh Sadeghi",
"Sergey Levine"
],
"title": "Cad2rl: Real singleimage flight without a single real image",
"venue": "In Proceedings of Robotics: Science and Systems,",
"year": 2017
},
{
"authors": [
"Tim Salimans",
"Richard Chen"
],
"title": "Learning montezuma’s revenge from a single demonstration",
"venue": "arXiv preprint arXiv:1812.03381,",
"year": 2018
},
{
"authors": [
"John Schulman",
"Filip Wolski",
"Prafulla Dhariwal",
"Alec Radford",
"Oleg Klimov"
],
"title": "Proximal policy optimization algorithms",
"venue": "arXiv preprint arXiv:1707.06347,",
"year": 2017
},
{
"authors": [
"David Silver",
"Aja Huang",
"Chris J Maddison",
"Arthur Guez",
"Laurent Sifre",
"George Van Den Driessche",
"Julian Schrittwieser",
"Ioannis Antonoglou",
"Veda Panneershelvam",
"Marc Lanctot"
],
"title": "Mastering the game of go with deep neural networks and tree",
"venue": "search. nature,",
"year": 2016
},
{
"authors": [
"David Silver",
"Thomas Hubert",
"Julian Schrittwieser",
"Ioannis Antonoglou",
"Matthew Lai",
"Arthur Guez",
"Marc Lanctot",
"Laurent Sifre",
"Dharshan Kumaran",
"Thore Graepel"
],
"title": "Mastering chess and shogi by selfplay with a general reinforcement learning algorithm",
"venue": "arXiv preprint arXiv:1712.01815,",
"year": 2017
},
{
"authors": [
"Rupesh Kumar Srivastava",
"Bas R Steunebrink",
"Jürgen Schmidhuber"
],
"title": "First experiments with powerplay",
"venue": "Neural Networks,",
"year": 2013
},
{
"authors": [
"Sainbayar Sukhbaatar",
"Emily Denton",
"Arthur Szlam",
"Rob Fergus"
],
"title": "Learning goal embeddings via selfplay for hierarchical reinforcement learning",
"venue": "arXiv preprint arXiv:1811.09083,",
"year": 2018
},
{
"authors": [
"Sainbayar Sukhbaatar",
"Zeming Lin",
"Ilya Kostrikov",
"Gabriel Synnaeve",
"Arthur Szlam",
"Rob Fergus"
],
"title": "Intrinsic motivation and automatic curricula via asymmetric selfplay",
"venue": "In International Conference on Learning Representations,",
"year": 2018
},
{
"authors": [
"Gerald Tesauro"
],
"title": "Temporal difference learning and tdgammon",
"venue": "Communications of the ACM,",
"year": 1995
},
{
"authors": [
"Josh Tobin",
"Rachel Fong",
"Alex Ray",
"Jonas Schneider",
"Wojciech Zaremba",
"Pieter Abbeel"
],
"title": "Domain randomization for transferring deep neural networks from simulation to the real world",
"venue": "In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),",
"year": 2017
},
{
"authors": [
"Emanuel Todorov",
"Tom Erez",
"Yuval Tassa"
],
"title": "MuJoCo: A physics engine for modelbased control",
"venue": "In Intelligent Robots and Systems (IROS),",
"year": 2012
},
{
"authors": [
"Mel Vecerik",
"Todd Hester",
"Jonathan Scholz",
"Fumin Wang",
"Olivier Pietquin",
"Bilal Piot",
"Nicolas Heess",
"Thomas Rothörl",
"Thomas Lampe",
"Martin Riedmiller"
],
"title": "Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards, 2018",
"venue": null,
"year": 2018
},
{
"authors": [
"Alexander Sasha Vezhnevets",
"Simon Osindero",
"Tom Schaul",
"Nicolas Heess",
"Max Jaderberg",
"David Silver",
"Koray Kavukcuoglu"
],
"title": "Feudal networks for hierarchical reinforcement learning",
"venue": "In International Conference on Machine Learning,",
"year": 2017
},
{
"authors": [
"Oriol Vinyals",
"Igor Babuschkin",
"Wojciech M Czarnecki",
"Michaël Mathieu",
"Andrew Dudzik",
"Junyoung Chung",
"David H Choi",
"Richard Powell",
"Timo Ewalds",
"Petko Georgiev"
],
"title": "Grandmaster level in starcraft ii using multiagent reinforcement learning",
"venue": null,
"year": 2019
},
{
"authors": [
"Jane X Wang",
"Zeb KurthNelson",
"Dhruva Tirumala",
"Hubert Soyer",
"Joel Z Leibo",
"Remi Munos",
"Charles Blundell",
"Dharshan Kumaran",
"Matt Botvinick"
],
"title": "Learning to reinforcement learn",
"venue": "arXiv preprint arXiv:1611.05763,",
"year": 2016
},
{
"authors": [
"Rui Wang",
"Joel Lehman",
"Jeff Clune",
"Kenneth O Stanley"
],
"title": "Paired openended trailblazer (poet): Endlessly generating increasingly complex and diverse learning environments and their solutions",
"venue": null,
"year": 1901
},
{
"authors": [
"Rui Wang",
"Joel Lehman",
"Aditya Rawal",
"Jiale Zhi",
"Yulun Li",
"Jeff Clune",
"Kenneth O Stanley"
],
"title": "Enhanced poet: Openended reinforcement learning through unbounded invention of learning challenges and their solutions",
"venue": "arXiv preprint arXiv:2003.08536,",
"year": 2020
},
{
"authors": [
"Tianhe Yu",
"Deirdre Quillen",
"Zhanpeng He",
"Ryan Julian",
"Karol Hausman",
"Chelsea Finn",
"Sergey Levine"
],
"title": "Metaworld: A benchmark and evaluation for multitask and meta reinforcement learning",
"venue": "In Conference on Robot Learning,",
"year": 2020
},
{
"authors": [
"Yunzhi Zhang",
"Pieter Abbeel",
"Lerrel Pinto"
],
"title": "Automatic curriculum learning through value disagreement",
"venue": "arXiv preprint arXiv:2006.09641,",
"year": 2020
}
]
 [
{
"heading": "1 INTRODUCTION",
"text": "We are motivated to train a single goalconditioned policy (Kaelbling, 1993) that can solve any robotic manipulation task that a human may request in a given environment. In this work, we make progress towards this goal by solving a robotic manipulation problem in a tabletop setting where the robot’s task is to change the initial configuration of a variable number of objects on a table to match a given goal configuration. This problem is simple in its formulation but likely to challenge a wide variety of cognitive abilities of a robot as objects become diverse and goals become complex.\nMotivated by the recent success of deep reinforcement learning for robotics (Levine et al., 2016; Gu et al., 2017; Hwangbo et al., 2019; OpenAI et al., 2019a), we tackle this problem using deep reinforcement learning on a very large training distribution. An open question in this approach is how we can build a training distribution rich enough to achieve generalization to many unseen manipulation tasks. This involves defining both an environment’s initial state distribution and a goal distribution. The initial state distribution determines how we sample a set of objects and their configuration at the beginning of an episode, and the goal distribution defines how we sample target states given an initial state. In this work, we focus on a scalable way to define a rich goal distribution.\nThe research community has started to explore automated ways of defining goal distributions. For example, previous works have explored learning a generative model of goal distributions (Florensa et al., 2018; Nair et al., 2018b; Racaniere et al., 2020) and collecting teleoperated robot trajectories\nto identify goals (Lynch et al., 2020; Gupta et al., 2020). In this paper, we extend an alternative approach called asymmetric selfplay (Sukhbaatar et al., 2018b;a) for automated goal generation. Asymmetric selfplay trains two RL agents named Alice and Bob. 
Alice learns to propose goals that Bob is likely to fail at, and Bob, a goal-conditioned policy, learns to solve the proposed goals. Alice proposes a goal by manipulating objects, and Bob has to solve the goal starting from the same initial state as Alice. By embodying these two agents in the same robotic hardware, this setup ensures that all proposed goals come with at least one solution: Alice’s trajectory.\nThere are two main reasons why we consider asymmetric self-play to be a promising goal generation and learning method. First, any proposed goal is achievable, meaning that there exists at least one solution trajectory that Bob can follow to achieve the goal. Because of this property, we can exploit Alice’s trajectory to provide an additional learning signal to Bob via behavioral cloning. This additional learning signal alleviates the overhead of heuristically designing a curriculum or shaping rewards for learning. Second, this approach does not require labor-intensive data collection.\nIn this paper, we show that asymmetric self-play can be used to train a goal-conditioned policy for complex object manipulation tasks, and that the learned policy can zero-shot generalize to many manually designed holdout tasks, which consist of either previously unseen goals, previously unseen objects, or both. To the best of our knowledge, this is the first work that presents zero-shot generalization to many previously unseen tasks by training purely with asymmetric self-play.1"
},
{
"heading": "2 PROBLEM FORMULATION",
"text": "Our training environment for robotic manipulation consists of a robot arm with a gripper attached and a wide range of objects placed on a table surface (Figure 1a,1b). The goalconditioned policy learns to control the robot to rearrange randomly placed objects (the initial state) into a specified goal configuration (Figure 1c). We aim to train a policy on a single training distribution and to evaluate its performance over a suite of holdout tasks which are independently designed and not explicitly present during training (Figure 2a). In this work, we construct the training distribution via asymmetric selfplay (Figure 2b) to achieve generalization to many unseen holdout tasks (Figure 1c).\nMathematical formulation Formally, we model the interaction between an environment and a goalconditioned policy as a goalaugmented Markov decision process M = hS,A,P,R,Gi, where S is the state space, A is the action space, P : S ⇥ A ⇥ S 7! R denotes the transition probability, G ✓ S specifies the goal space and R : S ⇥ G 7! R is a goalspecific reward function. A goalaugmented trajectory sequence is {(s0, g, a0, r0), . . . , (st, g, at, rt)}, where the goal is provided to the policy as part of the observation at every step. We say a goal is achieved if st is sufficiently close to g (Appendix A.2). With a slightly overloaded notation, we define the goal distribution G(gs0) as the probability of a goal state g 2 G conditioned on an initial state s0 2 S .\n1Asymmetric selfplay is proposed in Sukhbaatar et al. (2018b;a), but to supplement training while the majority of training is conducted on target tasks. Zeroshot generalization to unseen tasks was not evaluated.\nTraining goal distribution A naive design of the goal distribution G(gs0) is to randomly place objects uniformly on the table, but it is unlikely to generate interesting goals, such as an object picked up and held above the table surface by a robot gripper. 
Another possible approach, collecting tasks and goals manually, is expensive and hard to scale. We instead sidestep these issues and automatically generate goals via training based on asymmetric self-play (Sukhbaatar et al., 2018b;a). Asymmetric self-play involves using a policy named Alice, πA(a|s), to set goals and a goal-conditioned policy named Bob, πB(a|s, g), to solve the goals proposed by Alice, as illustrated in Figure 2b. We run πA to generate a trajectory τA = {(s0, a0, r0), . . . , (sT , aT , rT )}, and the last state is labelled as a goal g for πB to solve. The goal distribution G(sT = g|s0) is fully determined by πA, and we train Bob only on this goal distribution. We therefore speak of zero-shot generalization when Bob generalizes to a holdout task which is not explicitly encoded into the training distribution.\nEvaluation on holdout tasks To assess zero-shot generalization of πB(a|s, g) from our training setup, we hand-designed a suite of holdout tasks with goals that are never directly incorporated into the training distribution. Some holdout tasks also feature previously unseen objects. The holdout tasks are designed either to test whether a specific skill has been learned, such as the ability to pick up objects (Figure 3), or to represent a semantically interesting task, such as setting a table (Figure 1c). Appendix B.6 describes the list of holdout tasks that we use in our experiments. Note that none of the holdout tasks are used for training πB(a|s, g)."
},
{
"heading": "3 ASYMMETRIC SELFPLAY",
"text": "To train Alice policy ⇡A(as) and Bob policy ⇡B(as, g), we run the following multigoal game within one episode, as illustrated in Figure 2b:\n1. An initial state s0 is sampled from an initial state distribution. Alice and Bob are instantiated into their own copies of the environment. Alice and Bob alternate turns as follows.\n2. Alice’s turn. Alice interacts with its environment for a fixed number of T steps and may rearrange the objects. The state at the end of Alice’s turn sT will be used as a goal g for Bob. If the proposed goal is invalid (e.g. if Alice has not moved any objects, or if an object has fallen off the table), the episode terminates.\n3. Bob’s turn. Bob receives reward if it successfully achieves the goal g in its environment. Bob’s turn ends when it succeeds at achieving the goal or reaches a timeout. If Bob’s turn ends in a failure, its remaining turns are skipped and treated as failures, while we let Alice to keep generating goals.\n4. Alice receives reward if Bob fails to solve the goal that Alice proposed. Steps 2–3 are repeated until 5 goals are set by Alice or Alice proposes an invalid goal, and then the episode terminates.\nThe competition created by this game encourages Alice to propose goals that are increasingly challenging to Bob, while Bob is forced to solve increasingly complex goals. The multigoal setup was chosen to allow Bob to take advantage of environmental information discovered earlier in the episode to solve its remaining goals, which OpenAI et al. (2019a) found to be important for transfer to physical systems. Note however that in this work we focus on solving goals in simulation only. To improve stability and avoid forgetting, we have Alice and Bob play against past versions of their respective opponent in 20% of games. More details about the game structure and pseudocode for training with asymmetric selfplay are available in Appendix A."
},
{
"heading": "3.1 REWARD STRUCTURE",
"text": "For Bob, we assign sparse goalconditioned rewards. We measure the positional and rotational distance between an object and its goal state as the Euclidean distance and the Euler angle rotational distance, respectively. Whenever both distance metrics are below a small error (the success threshold), this object is deemed to be placed close enough to the goal state and Bob receives 1 reward immediately. But if this object is moved away from the goal state that it has arrived at in past steps, Bob obtains 1 reward such that the sum of perobject reward is at most 1 during a given turn. When all of the objects are in their goal state, Bob receives 5 additional reward and its turn is over.\nFor Alice, we assign a reward after Bob has attempted to solve the goal: 5 reward if Bob failed at solving the goal, and 0 if Bob succeeded. We shape Alice’s reward slightly by adding 1 reward if it has set a valid goal, defined to be when no object has fallen off the table and any object has been moved more than the success threshold. An additional penalty of 3 reward is introduced when Alice sets a goal with objects outside of the placement area, defined to be a fixed 3D volume within the view of the robot’s camera. More details are discussed in Appendix A.2."
},
{
"heading": "3.2 ALICE BEHAVIORAL CLONING (ABC)",
"text": "One of the main benefits of using asymmetric selfplay is that the generated goals come with at least one solution to achieve it: Alice’s trajectory. Similarly to Sukhbaatar et al. (2018a), we exploit this property by training Bob with Behavioral Cloning (BC) from Alice’s trajectory, in addition to the reinforcement learning (RL) objective. We call this learning mechanism Alice Behavioral Cloning (ABC). We propose several improvements over the original formulation in Sukhbaatar et al. (2018a).\nDemonstration trajectory filtering Compared to BC from expert demonstrations, using Alice’s trajectory needs extra care. Alice’s trajectory is likely to be suboptimal for solving the goal, as Alice might arrive at the final state merely by accident. Therefore, we only consider trajectories with goals that Bob failed to solve as demonstrations, to avoid distracting Bob with suboptimal examples. Whenever Bob fails, we relabel Alice’s trajectory ⌧A to be a goalaugmented version ⌧BC = {(s0, sT , a0, r0), . . . , (sT , sT , aT , rT )} as a demonstration for BC, where sT is the goal.\nPPOstyle BC loss clipping The objective for training Bob is L = LRL + Labc, where LRL is an RL objective and Labc is the ABC loss. is a hyperparameter controlling the relative importance of the BC loss. We set = 0.5 throughout the whole experiment. A naive BC loss is to minimize the negative loglikelihood of demonstrated actions, E(st,gt,at)2DBC ⇥ log ⇡B(atst, gt; ✓) ⇤ where DBC is a minibatch of demonstration data and ⇡B is parameterized by ✓. We found that overlyaggressive policy changes triggered by BC sometimes led to learning instabilities. 
To prevent the policy from changing too drastically, we introduce PPO-style loss clipping (Schulman et al., 2017) on the BC loss by setting the advantage Â = 1 in the clipped surrogate objective:\nLabc = −E(st,gt,at)∈DBC [ min( πB(at|st, gt; θ) / πB(at|st, gt; θold), clip( πB(at|st, gt; θ) / πB(at|st, gt; θold), 1 − ε, 1 + ε ) ) ]\nwhere πB(at|st, gt; θ) is Bob’s likelihood of a demonstrated action under the parameters that we are optimizing, and πB(at|st, gt; θold) is the likelihood under Bob’s behavior policy (at the time of demonstration collection) evaluated on the demonstration. This behavior policy is identical to the policy that we use to collect RL trajectories. By setting Â = 1, this objective optimizes the naive BC loss, but clips the loss whenever πB(at|st, gt; θ)/πB(at|st, gt; θold) is bigger than 1 + ε, to prevent the policy from changing too much. ε is a clipping threshold, and we use ε = 0.2 in all experiments."
},
{
"heading": "4 RELATED WORK",
"text": "Training distribution for RL In the context of multitask RL (Beattie et al., 2016; Hausman et al., 2018; Yu et al., 2020), multigoal RL (Kaelbling, 1993; Andrychowicz et al., 2017), and meta RL (Wang et al., 2016; Duan et al., 2016), previous works manually designed a distribution of tasks or goals to see better generalization of a policy to a new task or goal. Domain randomization (Sadeghi & Levine, 2017b; Tobin et al., 2017; OpenAI et al., 2020) manually defines a distribution of simulated environments, but in service of generalizing to the same task in the real world.\nThere are approaches to grow the training distribution automatically (Srivastava et al., 2013). Selfplay (Tesauro, 1995; Silver et al., 2016; 2017; Bansal et al., 2018; OpenAI et al., 2019b; Vinyals et al., 2019) constructs an evergrowing training distribution where multiple agents learn by competing with each other, so that the resulting agent shows strong performance on a single game. OpenAI et al. (2019a) automatically grew a distribution of domain randomization parameters to accomplish better generalization in the task of solving a Rubik’s cube on the physical robot. Wang et al. (2019;\n2020) studied an automated way to keep discovering challenging 2D terrains and locomotion policies that can solve them in a 2D bipedal walking environment.\nWe employ asymmetric selfplay to construct a training distribution for learning a goalconditioned policy and to achieve generalization to unseen tasks. Florensa et al. (2018); Nair et al. (2018b); Racaniere et al. (2020) had the same motivation as ours, but trained a generative model instead of a goal setting policy. Thus, the difficulties of training a generative model were inherited by these methods: difficulty of modeling a high dimensional space and generation of unrealistic samples. Lynch et al. (2020); Gupta et al. 
(2020) used teleoperation to collect arbitrary robot trajectories, and defined a goal distribution from the states in the collected trajectories. This approach likely requires a large number of robot trajectories for each environment configuration (e.g. various types of objects on a table), and randomization of objects was not studied in this context.\nAsymmetric self-play Asymmetric self-play was proposed by Sukhbaatar et al. (2018b) as a way to supplement RL training. Sukhbaatar et al. (2018b) mixed asymmetric self-play training with standard RL training on the target task and measured the performance on the target task. Sukhbaatar et al. (2018a) used asymmetric self-play to pretrain a hierarchical policy and evaluated the policy after fine-tuning it on a target task. Liu et al. (2019) adopted self-play to encourage efficient learning with sparse rewards in the context of an exploration competition between a pair of agents. As far as we know, no previous work has trained a goal-conditioned policy purely based on asymmetric self-play and evaluated generalization to unseen holdout tasks.\nCurriculum learning Many previous works showed the difficulty of RL and proposed an automated curriculum (Andrychowicz et al., 2017; Florensa et al., 2017; Salimans & Chen, 2018; Matiisen et al., 2019; Zhang et al., 2020) or auxiliary exploration objectives (Oudeyer et al., 2007; Baranes & Oudeyer, 2013; Pathak et al., 2017; Burda et al., 2019; Ecoffet et al., 2019; 2020) to learn predefined tasks. When training goal-conditioned policies, relabeling or reversing trajectories (Andrychowicz et al., 2017; Florensa et al., 2017; Salimans & Chen, 2018) or imitating successful demonstrations (Oh et al., 2018; Ecoffet et al., 2019; 2020) naturally reduces the task complexity. 
Our work shares a similarity in that asymmetric self-play alleviates the difficulty of learning a goal-conditioned policy via an intrinsic curriculum and imitation of the goal setter’s trajectory, but our work does not assume any predefined task or goal distribution.\nHierarchical reinforcement learning (HRL) Some HRL methods jointly trained a goal setting policy (high-level or manager policy) and a goal solving policy (low-level or worker policy) (Vezhnevets et al., 2017; Levy et al., 2019; Nachum et al., 2018). However, the motivation for learning a goal setting policy in HRL is not to challenge the goal solving policy, but to cooperate in tackling a task that can be decomposed into a sequence of subgoals. Hence, this goal setting policy is trained to optimize task reward for the target task, unlike asymmetric self-play where the goal setter is rewarded upon the other agent’s failure.\nRobot learning for object manipulation. It has been reported that training a policy for multi-object manipulation is very challenging with sparse rewards (Riedmiller et al., 2018; Vecerik et al., 2018). One example is block stacking, which has been studied for a long time in robotics as it involves complex contact reasoning and long-horizon motion planning (Deisenroth et al., 2011). Learning block stacking often requires a hand-designed curriculum (Li et al., 2019), meticulous reward shaping (Popov et al., 2017), fine-tuning (Rusu et al., 2017), or human demonstrations (Nair et al., 2018a; Duan et al., 2017). In this work, we use block stacking as one of the holdout tasks to test zero-shot generalization, but without training on it."
},
{
"heading": "5 EXPERIMENTS",
"text": "In this section, we first show that asymmetric self-play generates an effective training curriculum that enables generalization to unseen holdout tasks. We then scale up the experiments to train in an environment containing multiple random complex objects, and evaluate on a set of holdout tasks containing unseen objects and unseen goal configurations. Finally, we demonstrate how critical ABC is for Bob to make progress in a set of ablation studies.\nFigure 3: Holdout tasks in the environment using 1 or 2 blocks. The transparent blocks denote the desired goal state, while opaque blocks are the current state. (a) push: The blocks must be moved to their goal locations and orientations. There is no differentiation between the six block faces. (b) flip: Each side of the block is labelled with a unique letter. The blocks must be moved so that every face is positioned as the goal specifies. (c) pick-and-place: One goal block is in the air. (d) stack: Two blocks must be stacked in the right order at the right location.\n[Figure 4 plot: success rate (%) vs. training steps (x100) for push, flip, pick-and-place, and stack; legend: no curriculum, curriculum: distance, curriculum: distribution, curriculum: full, self-play]\nFigure 4: Generalization to unseen holdout tasks for blocks. Baselines are trained over a mixture of all holdout tasks. The solid lines represent 2 blocks, while the dashed lines are for 1 block. The x-axis denotes the number of training steps via asymmetric self-play. The y-axis is the zero-shot generalization performance of the Bob policy at the corresponding training checkpoints. Note that the success rate curves of completely failed baselines are occluded by others."
},
{
"heading": "5.1 EXPERIMENTAL SETUP",
"text": "We implement the training environment2 described in Sec. 2 with randomly placed ShapeNet objects (Chang et al., 2015) as an initial state distribution. In addition, we set up another, simpler environment using one or two blocks of fixed size, used for small-scale comparisons and ablation studies. Figure 3 visualizes four holdout tasks for this environment. Each task is designed to evaluate whether the robot has acquired certain manipulation skills: pushing, flipping, picking up and stacking blocks. Experiments in Sec. 5.2, 5.3 and 5.5 focus on blocks, and experimental results based on ShapeNet objects are presented in Sec. 5.4. More details on our training setups are in Appendix B.\nWe implement Alice and Bob as two independent policies with the same network architecture with memory (Appendix B.4), except that Alice has no observation of the goal state. The policies take state observations (“state policy”) for experiments with blocks (Sec. 5.2, 5.3, and 5.5), and take both vision and state observations (“hybrid policy”) for experiments with ShapeNet objects (Sec. 5.4). Both policies are trained with Proximal Policy Optimization (PPO) (Schulman et al., 2017)."
},
{
"heading": "5.2 GENERALIZATION TO UNSEEN GOALS WITHOUT MANUAL CURRICULA",
"text": "One way to train a single policy to acquire all the skills in Figure 3 is to train a goal-conditioned policy directly over a mixture of these tasks. However, training directly over these tasks without a curriculum turns out to be very challenging, as the policy completely fails to make any progress.3 In contrast, Bob is able to solve all these holdout tasks quickly when learning via asymmetric self-play, without explicitly encoding any prior knowledge of the holdout tasks into the training distribution.\nTo gauge the effect of the intrinsic curriculum introduced by self-play, we carefully designed a set of non-self-play baselines using explicit curricula controlled by Automatic Domain Randomization (OpenAI et al., 2019a). All baselines are trained over a mixture of block holdout tasks as the\n2Our training and evaluation environments are publicly available at hideforanonymouspurpose 3The tasks were easier when we ignored object rotation as part of the goal and used a smaller table.\ngoal distribution. We measure the effectiveness of a training setup by tracking the success rate for each holdout task, as shown in Figure 4. The no curriculum baseline fails drastically. The curriculum:distance baseline gradually expands the distance between the initial and goal states as training progresses, but only learns to push and flip a single block. The curriculum:distribution baseline, which slowly increases the proportion of pick-and-place and stacking goals in the training distribution, fails to acquire any skill. The curriculum:full baseline incorporates all hand-designed curricula yet still cannot learn how to pick up or stack blocks. We spent a considerable amount of time iterating on and improving these baselines, but found it especially difficult to develop a scheme good enough to compete with asymmetric self-play. See Appendix C.1 for more details of our baselines."
},
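As a concrete illustration of the curriculum:distance baseline described above, here is a hypothetical ADR-style controller that widens the initial-to-goal distance when recent success clears a threshold. The class name, thresholds, step sizes, and success-rate window are all illustrative assumptions, not values from the paper:

```python
import random

class DistanceCurriculum:
    """Sketch of an ADR-style distance curriculum (illustrative parameters).

    The maximum distance between initial and goal block positions grows
    whenever the policy's success rate over a recent window clears a
    threshold, so goals get harder only as the policy improves.
    """

    def __init__(self, d_init=0.05, d_max=0.5, step=0.05,
                 threshold=0.8, window=100):
        self.d = d_init          # current maximum goal distance (meters)
        self.d_max = d_max
        self.step = step
        self.threshold = threshold
        self.window = window
        self.outcomes = []       # recent episode outcomes

    def sample_goal_distance(self, rng):
        # Goals are drawn uniformly within the current maximum distance.
        return rng.uniform(0.0, self.d)

    def record(self, success):
        self.outcomes.append(bool(success))
        if len(self.outcomes) >= self.window:
            rate = sum(self.outcomes) / len(self.outcomes)
            if rate >= self.threshold:
                self.d = min(self.d + self.step, self.d_max)
            self.outcomes = []   # start a fresh window
```

Even a controller like this only schedules one axis of difficulty (distance); the results above suggest that hand-picking such axes is exactly what makes explicit curricula hard to tune compared with self-play.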
{
"heading": "5.3 DISCOVERY OF NOVEL GOALS AND SOLUTIONS",
"text": "Asymmetric self-play discovers novel goals and solutions that are not covered by our holdout tasks. As illustrated in Figure 5, Alice can lift multiple blocks at the same time, build a tower and then keep it balanced using an arm joint. Although this is a tricky strategy for Bob to learn on its own, with ABC, Bob eventually acquires the skills for solving such complex tasks proposed by Alice. Videos are available at https://roboticsselfplay.github.io.\nFigure 6 summarizes Alice and Bob’s learning progress against each other. For every pair of Alice and Bob, we ran multiple self-play episodes and measured the success rate. We observe an interesting trend with 2 blocks. As training proceeds, Alice tends to generate more challenging goals, on which Bob shows a lower success rate. With past sampling, Bob continues to make progress against versions of Alice from earlier optimization steps. This visualization suggests a desired dynamic of asymmetric self-play that could potentially lead to unbounded complexity: Alice continuously generates goals to challenge Bob, and Bob keeps making progress on learning to solve new goals."
},
{
"heading": "5.4 GENERALIZATION TO UNSEEN OBJECTS AND GOALS",
"text": "The experiments above show strong evidence that efficient curricula and novel goals can autonomously emerge in asymmetric self-play. To further challenge our approach, we scale it up to work with many more complex objects using more computational resources for training. We train a hybrid policy in an environment containing up to 10 random ShapeNet (Chang et al., 2015) objects. During training, we randomize the number of objects and the object sizes via Automatic Domain Randomization (OpenAI et al., 2019a). The hybrid policy uses vision observations to extract information about object geometry and size. We evaluate the Bob policy on a more diverse set of manipulation tasks, including semantically interesting ones. Many tasks contain unseen objects and complex goals, as illustrated in Figure 7.\n[Figure 8 bar chart: per-task success rates (%) with varying numbers of objects, for block tasks (push, pick & place, stacking), YCB-object tasks (push, pick & place, ball capture, mini chess, dominos), and customized-object tasks (table setting, tangram, rainbow)]\nFigure 8: Success rates of a single goal-conditioned policy solving a variety of holdout tasks, averaged over 100 trials. The error bars indicate the 99% confidence intervals. Yellow, orange and blue bars correspond to success rates of manipulation tasks with blocks, YCB4 objects and other uniquely built objects, respectively. Videos are available at https://roboticsselfplay.github.io.\nThe learned Bob policy achieves decent zero-shot generalization performance on many tasks. Success rates are reported in Figure 8. Several tasks are still challenging. For example, ball capture requires delicate handling of rolling objects and lifting skills. The rainbow tasks call for an understanding of concave shapes. 
Understanding the ordering of placement actions is crucial for stacking more than 3 blocks in the desired order. The Bob policy learns such an ordering to some degree, but fails to fully generalize to an arbitrary number of stacked blocks."
},
{
"heading": "5.5 ABLATION STUDIES",
"text": "We present a series of ablation studies designed to measure the importance of each component in our asymmetric self-play framework, including Alice behavioral cloning (ABC), BC loss clipping, demonstration filtering, and the multi-goal game setup. We disable a single ingredient in each ablation run and compare with the complete self-play baseline in Figure 9.\nThe no ABC baseline shows that Bob completely fails to solve any holdout task without ABC, indicating that ABC is a critical mechanism in asymmetric self-play. The no BC loss clipping baseline shows slightly slower learning on pick-and-place and stack, as well as some instabilities in the middle of training. Clipping in the BC loss is expected to help alleviate this instability by controlling the rate of policy change per optimizer iteration. The no demonstration filter baseline shows noticeable instability on flip, suggesting the importance of excluding suboptimal demonstrations from behavioral cloning. Finally, the single-goal baseline uses a single goal instead of 5 goals per episode during training. The evaluation tasks are also updated to require a single success per episode. Generalization of this baseline to holdout tasks turns out to be much slower and less stable. This signifies some advantages of using multiple goals per episode, perhaps because the policy memory internalizes environmental information during multiple trials of goal solving.\nThe results of the ablation studies suggest that ABC with proper configuration and multi-goal gameplay are critical components of asymmetric self-play, alleviating the need for manual curricula and facilitating efficient learning."
},
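The "demonstration filter" ablated above can be made concrete with a heavily hedged sketch. This section does not spell out the exact filtering rule, so the rule below is an assumption: Alice's trajectory is kept as a BC demonstration only when Alice set a valid (solved) goal and Bob failed on it, so Bob never clones behavior for goals it can already reach. The episode keys are hypothetical:

```python
def filter_demonstrations(episodes):
    """Hedged sketch of a demonstration filter (assumed rule, not the paper's).

    Each episode is a dict with hypothetical keys 'alice_success',
    'bob_success', and 'alice_trajectory'. Only trajectories where Alice
    succeeded and Bob failed are kept for behavioral cloning.
    """
    demos = []
    for ep in episodes:
        if ep["alice_success"] and not ep["bob_success"]:
            demos.append(ep["alice_trajectory"])
    return demos
```

Whatever the exact rule, the ablation result suggests the key property is excluding suboptimal or redundant trajectories from the BC dataset.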
{
"heading": "6 CONCLUSION",
"text": "One limitation of our asymmetric self-play approach is that it depends on a resettable simulation environment, as Bob needs to start from the same initial state as Alice. Therefore asymmetric self-play training has to happen in a simulator which can easily be set to a desired state. In order to run the goal-solving policy on physical robots, we plan to adopt sim-to-real techniques in future work. Sim-to-real has been shown to achieve great performance on many robotic tasks in the real world (Sadeghi & Levine, 2017a; Tobin et al., 2017; James et al., 2019; OpenAI et al., 2020). One potential approach is to pretrain two agents via asymmetric self-play in simulation, and then fine-tune the Bob policy with domain randomization or data collected on physical robots.\nIn conclusion, we studied asymmetric self-play as a framework for defining a single training distribution for learning many arbitrary object manipulation tasks. Even without any prior knowledge about the target tasks, asymmetric self-play is able to train a strong goal-conditioned policy that can generalize to many unseen holdout tasks. We found that asymmetric self-play not only generates a wide range of interesting goals but also alleviates the necessity of designing manual curricula for learning such goals. We provided evidence that using the goal setting trajectory as a demonstration for training the goal solving policy is essential for efficient learning. We further scaled up our approach to work with various complex objects using more computation, and achieved zero-shot generalization to a collection of challenging manipulation tasks involving unseen objects and unseen goals.\n4https://www.ycbbenchmarks.com/objectmodels/"
}
]
 2020
 
SP:038a1d3066f8273977337262e975d7a7aab5002f
 [
"The paper introduces a theoretical framework for analyzing GNN transferability. The main idea is to view a graph as subgraph samples with the information of both the connections and the features. Based on this view, the authors define EGI score of a graph as a learnable function that needs to be optimized by maximizing the mutual information between the subgraph and the GNN output embedding of the center node. Then, the authors give an upper bound for the difference of EGI scores of two graphs based on the difference of eigenvalues of the graph Laplacian of the subgraph samples from the two graphs. The implication is that if the difference of the eigenvalues is small, then the EGI scores are similar, which means the GNN has a similar ability to encode the structure of the two graphs. ",
"This paper develops a novel measure for assessing the transferability of graph neural network models to new data sets. The measure is based on a decomposition of graphs into 'ego networks' (essentially, a distribution of $k$-hop subgraphs extracted from a given larger graph). Transferability is then assessed by means of a spectral criterion using the graph Laplacian. Experiments demonstrate the utility of assessing transferability in such a manner, as the new measure appears to be aligned with improvements in predictive performance.",
"This work aims to provide fundamental understanding towards the mechanism and transferability of GNNs, and develops an unsupervised GNN training objective based on their understanding. Novel theoretical analysis has been done to support the design of EGI and establish its transferability bound, while the effectiveness of EGI and the utility of the transferability bound are verified by extensive experiments. The whole story looks new, comprehensive and convincing to me."
]
 Graph neural networks (GNNs) have achieved superior performance in various applications, but training dedicated GNNs can be costly for large-scale graphs. Some recent work has started to study the pre-training of GNNs. However, none of them provide theoretical insights into the design of their frameworks, or clear requirements and guarantees towards their transferability. In this work, we establish a theoretically grounded and practically useful framework for the transfer learning of GNNs. Firstly, we propose a novel view towards the essential graph information and advocate the capturing of it as the goal of transferable GNN training, which motivates the design of EGI (Ego-Graph Information maximization) to analytically achieve this goal. Secondly, when node features are structure-relevant, we conduct an analysis of EGI transferability regarding the difference between the local graph Laplacians of the source and target graphs. We conduct controlled synthetic experiments to directly justify our theoretical conclusions. Comprehensive experiments on two real-world network datasets show consistent results in the analyzed setting of direct-transferring, while those on large-scale knowledge graphs show promising results in the more practical setting of transferring with fine-tuning.1
 [
{
"affiliations": [],
"name": "Qi Zhu"
},
{
"affiliations": [],
"name": "Carl Yang"
},
{
"affiliations": [],
"name": "Yidan Xu"
},
{
"affiliations": [],
"name": "Haonan Wang"
},
{
"affiliations": [],
"name": "Chao Zhang"
},
{
"affiliations": [],
"name": "Jiawei Han"
}
]
 [
{
"authors": [
"Réka Albert",
"AlbertLászló Barabási"
],
"title": "Statistical mechanics of complex networks",
"venue": "Reviews of modern physics,",
"year": 2002
},
{
"authors": [
"Sanjeev Arora",
"Elad Hazan",
"Satyen Kale"
],
"title": "Fast algorithms for approximate semidefinite programming using the multiplicative weights update method",
"venue": "In FOCS,",
"year": 2005
},
{
"authors": [
"Jinheon Baek",
"Dong Bok Lee",
"Sung Ju Hwang"
],
"title": "Learning to extrapolate knowledge: Transductive fewshot outofgraph link prediction",
"venue": "Advances in Neural Information Processing Systems,",
"year": 2020
},
{
"authors": [
"Lu Bai",
"Edwin R Hancock"
],
"title": "Fast depthbased subgraph kernels for unattributed graphs",
"venue": "Pattern Recognition,",
"year": 2016
},
{
"authors": [
"AlbertLászló Barabási",
"Réka Albert"
],
"title": "Emergence of scaling in random networks",
"venue": "Science",
"year": 1999
},
{
"authors": [
"Mikhail Belkin",
"Partha Niyogi"
],
"title": "Laplacian eigenmaps and spectral techniques for embedding and clustering",
"venue": "In NIPS,",
"year": 2002
},
{
"authors": [
"Shai BenDavid",
"John Blitzer",
"Koby Crammer",
"Fernando Pereira"
],
"title": "Analysis of representations for domain adaptation",
"venue": "In NIPS,",
"year": 2007
},
{
"authors": [
"Karsten Borgwardt",
"Elisabetta Ghisu",
"Felipe LlinaresLópez",
"Leslie O’Bray",
"Bastian Rieck"
],
"title": "Graph kernels: Stateoftheart and future challenges",
"venue": "arXiv preprint arXiv:2011.03854,",
"year": 2020
},
{
"authors": [
"Joan Bruna",
"Wojciech Zaremba",
"Arthur Szlam",
"Yann LeCun"
],
"title": "Spectral networks and locally connected networks on graphs",
"venue": "In ICLR,",
"year": 2014
},
{
"authors": [
"Jie Chen",
"Tengfei Ma",
"Cao Xiao"
],
"title": "Fastgcn: fast learning with graph convolutional networks via importance sampling",
"venue": "In ICLR,",
"year": 2018
},
{
"authors": [
"Fan RK Chung",
"Fan Chung Graham"
],
"title": "Spectral graph theory",
"venue": "Number 92. American Mathematical Soc.,",
"year": 1997
},
{
"authors": [
"Michaël Defferrard",
"Xavier Bresson",
"Pierre Vandergheynst"
],
"title": "Convolutional neural networks on graphs with fast localized spectral filtering",
"venue": "In NIPS,",
"year": 2016
},
{
"authors": [
"Jacob Devlin",
"MingWei Chang",
"Kenton Lee",
"Kristina Toutanova"
],
"title": "Bert: Pretraining of deep bidirectional transformers for language understanding",
"venue": "In ACL,",
"year": 2019
},
{
"authors": [
"Aditya Grover",
"Jure Leskovec"
],
"title": "node2vec: Scalable feature learning for networks",
"venue": "In KDD,",
"year": 2016
},
{
"authors": [
"Will Hamilton",
"Zhitao Ying",
"Jure Leskovec"
],
"title": "Inductive representation learning on large graphs",
"venue": "In NIPS,",
"year": 2017
},
{
"authors": [
"David K Hammond",
"Pierre Vandergheynst",
"Rémi Gribonval"
],
"title": "Wavelets on graphs via spectral graph theory",
"venue": "ACHA, 30(2):129–150,",
"year": 2011
},
{
"authors": [
"Kaveh Hassani",
"Amir Hosein Khasahmadi"
],
"title": "Contrastive multiview representation learning on graphs",
"venue": "In International Conference on Machine Learning,",
"year": 2020
},
{
"authors": [
"Kaiming He",
"Xiangyu Zhang",
"Shaoqing Ren",
"Jian Sun"
],
"title": "Deep residual learning for image recognition",
"venue": "In CVPR,",
"year": 2016
},
{
"authors": [
"Keith Henderson",
"Brian Gallagher",
"Tina EliassiRad",
"Hanghang Tong",
"Sugato Basu",
"Leman Akoglu",
"Danai Koutra",
"Christos Faloutsos",
"Lei Li"
],
"title": "Rolx: structural role extraction & mining in large graphs",
"venue": "In KDD,",
"year": 2012
},
{
"authors": [
"R Devon Hjelm",
"Alex Fedorov",
"Samuel LavoieMarchildon",
"Karan Grewal",
"Phil Bachman",
"Adam Trischler",
"Yoshua Bengio"
],
"title": "Learning deep representations by mutual information estimation and maximization",
"venue": "In ICLR,",
"year": 2019
},
{
"authors": [
"Weihua Hu",
"Bowen Liu",
"Joseph Gomes",
"Marinka Zitnik",
"Percy Liang",
"Vijay Pande",
"Jure Leskovec"
],
"title": "Strategies for pretraining graph neural networks",
"venue": "In ICLR,",
"year": 2019
},
{
"authors": [
"Ziniu Hu",
"Yuxiao Dong",
"Kuansan Wang",
"KaiWei Chang",
"Yizhou Sun"
],
"title": "Gptgnn: Generative pretraining of graph neural networks",
"venue": "In KDD,",
"year": 2020
},
{
"authors": [
"Ziniu Hu",
"Changjun Fan",
"Ting Chen",
"KaiWei Chang",
"Yizhou Sun"
],
"title": "Pretraining graph neural networks for generic structural feature extraction",
"venue": "arXiv preprint arXiv:1905.13728,",
"year": 2019
},
{
"authors": [
"SukGeun Hwang"
],
"title": "Cauchy’s interlace theorem for eigenvalues of hermitian matrices",
"venue": "The American Mathematical Monthly,",
"year": 2004
},
{
"authors": [
"Xuan Kan",
"Hejie Cui",
"Carl Yang"
],
"title": "Zeroshot scene graph relation prediction through commonsense knowledge integration",
"venue": "In ECMLPKDD,",
"year": 2021
},
{
"authors": [
"Nicolas Keriven",
"Gabriel Peyré"
],
"title": "Universal invariant and equivariant graph neural networks",
"venue": "In NIPS,",
"year": 2019
},
{
"authors": [
"Diederik P Kingma",
"Jimmy Ba"
],
"title": "Adam: A method for stochastic optimization",
"venue": "arXiv preprint arXiv:1412.6980,",
"year": 2014
},
{
"authors": [
"Thomas N Kipf",
"Max Welling"
],
"title": "Variational graph autoencoders",
"venue": "arXiv preprint arXiv:1611.07308,",
"year": 2016
},
{
"authors": [
"Thomas N Kipf",
"Max Welling"
],
"title": "Semisupervised classification with graph convolutional networks",
"venue": "In ICLR,",
"year": 2017
},
{
"authors": [
"Nils M Kriege",
"Fredrik D Johansson",
"Christopher Morris"
],
"title": "A survey on graph kernels",
"venue": "Applied Network Science,",
"year": 2020
},
{
"authors": [
"Lin Lan",
"Pinghui Wang",
"Xuefeng Du",
"Kaikai Song",
"Jing Tao",
"Xiaohong Guan"
],
"title": "Node classification on graphs with fewshot novel labels via meta transformed network embedding",
"venue": "Advances in Neural Information Processing Systems,",
"year": 2020
},
{
"authors": [
"Jure Leskovec",
"Jon Kleinberg",
"Christos Faloutsos"
],
"title": "Graphs over time: densification laws, shrinking diameters and possible explanations",
"venue": "In Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining,",
"year": 2005
},
{
"authors": [
"Ron Levie",
"Wei Huang",
"Lorenzo Bucci",
"Michael M Bronstein",
"Gitta Kutyniok"
],
"title": "Transferability of spectral graph convolutional neural networks",
"venue": null,
"year": 2019
},
{
"authors": [
"Ron Levie",
"Elvin Isufi",
"Gitta Kutyniok"
],
"title": "On the transferability of spectral graph filters",
"venue": "13th International conference on Sampling Theory and Applications (SampTA),",
"year": 2019
},
{
"authors": [
"Jenny Liu",
"Aviral Kumar",
"Jimmy Ba",
"Jamie Kiros",
"Kevin Swersky"
],
"title": "Graph normalizing flows",
"venue": "In Advances in Neural Information Processing Systems,",
"year": 2019
},
{
"authors": [
"Miller McPherson",
"Lynn SmithLovin",
"James M Cook"
],
"title": "Birds of a feather: Homophily in social networks",
"venue": "Annual review of sociology,",
"year": 2001
},
{
"authors": [
"Tomas Mikolov",
"Ilya Sutskever",
"Kai Chen",
"Greg Corrado",
"Jeffrey Dean"
],
"title": "Distributed representations of words and phrases and their compositionality",
"venue": "arXiv preprint arXiv:1310.4546,",
"year": 2013
},
{
"authors": [
"Giannis Nikolentzos",
"Giannis Siglidis",
"Michalis Vazirgiannis"
],
"title": "Graph kernels: A survey",
"venue": "arXiv preprint arXiv:1904.12218,",
"year": 2019
},
{
"authors": [
"Kenta Oono",
"Taiji Suzuki"
],
"title": "Graph neural networks exponentially lose expressive power for node classification",
"venue": "In ICLR,",
"year": 2020
},
{
"authors": [
"Lawrence Page",
"Sergey Brin",
"Rajeev Motwani",
"Terry Winograd"
],
"title": "The pagerank citation ranking: Bringing order to the web",
"venue": "Technical report, Stanford InfoLab,",
"year": 1999
},
{
"authors": [
"Zhen Peng",
"Wenbing Huang",
"Minnan Luo",
"Qinghua Zheng",
"Yu Rong",
"Tingyang Xu",
"Junzhou Huang"
],
"title": "Graph representation learning via graphical mutual information maximization",
"venue": "In WWW,",
"year": 2020
},
{
"authors": [
"Bryan Perozzi",
"Rami AlRfou",
"Steven Skiena"
],
"title": "Deepwalk: Online learning of social representations",
"venue": "In KDD,",
"year": 2014
},
{
"authors": [
"Jiezhong Qiu",
"Qibin Chen",
"Yuxiao Dong",
"Jing Zhang",
"Hongxia Yang",
"Ming Ding",
"Kuansan Wang",
"Jie Tang"
],
"title": "Gcc: Graph contrastive coding for graph neural network pretraining",
"venue": "In KDD,",
"year": 2020
},
{
"authors": [
"Sachin Ravi",
"Hugo Larochelle"
],
"title": "Optimization as a model for fewshot learning",
"venue": "In ICLR,",
"year": 2017
},
{
"authors": [
"Leonardo FR Ribeiro",
"Pedro HP Saverese",
"Daniel R Figueiredo"
],
"title": "struc2vec: Learning node representations from structural identity",
"venue": "In KDD,",
"year": 2017
},
{
"authors": [
"Sam T Roweis",
"Lawrence K Saul"
],
"title": "Nonlinear dimensionality reduction by locally linear embedding",
"venue": null,
"year": 2000
},
{
"authors": [
"Luana Ruiz",
"Luiz Chamon",
"Alejandro Ribeiro"
],
"title": "Graphon neural networks and the transferability of graph neural networks",
"venue": "Advances in Neural Information Processing Systems,",
"year": 2020
},
{
"authors": [
"Yu Shi",
"Qi Zhu",
"Fang Guo",
"Chao Zhang",
"Jiawei Han"
],
"title": "Easing embedding learning by comprehensive transcription of heterogeneous information networks",
"venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,",
"year": 2018
},
{
"authors": [
"Fabian M Suchanek",
"Gjergji Kasneci",
"Gerhard Weikum"
],
"title": "Yago: a core of semantic knowledge",
"venue": "In WWW,",
"year": 2007
},
{
"authors": [
"FanYun Sun",
"Jordan Hoffman",
"Vikas Verma",
"Jian Tang"
],
"title": "Infograph: Unsupervised and semisupervised graphlevel representation learning via mutual information maximization",
"venue": "In ICLR,",
"year": 2019
},
{
"authors": [
"Jian Tang",
"Meng Qu",
"Mingzhe Wang",
"Ming Zhang",
"Jun Yan",
"Qiaozhu Mei"
],
"title": "Line: Largescale information network embedding",
"venue": "In WWW,",
"year": 2015
},
{
"authors": [
"Joshua B Tenenbaum",
"Vin De Silva",
"John C Langford"
],
"title": "A global geometric framework for nonlinear dimensionality reduction",
"venue": null,
"year": 2000
},
{
"authors": [
"Petar Velickovic",
"Guillem Cucurull",
"Arantxa Casanova",
"Adriana Romero",
"Pietro Lio",
"Yoshua Bengio"
],
"title": "Graph attention networks",
"venue": "In ICLR,",
"year": 2018
},
{
"authors": [
"Petar Velickovic",
"William Fedus",
"William L Hamilton",
"Pietro Lio",
"Yoshua Bengio",
"R Devon Hjelm"
],
"title": "Deep graph infomax",
"venue": "In ICLR,",
"year": 2019
},
{
"authors": [
"Saurabh Verma",
"ZhiLi Zhang"
],
"title": "Stability and generalization of graph convolutional neural networks",
"venue": "In KDD,",
"year": 2019
},
{
"authors": [
"Oriol Vinyals",
"Charles Blundell",
"Tim Lillicrap",
"Daan Wierstra"
],
"title": "Matching networks for one shot learning",
"venue": "In NIPS,",
"year": 2016
},
{
"authors": [
"Boris Weisfeiler",
"Andrei A Lehman"
],
"title": "A reduction of a graph to a canonical form and an algebra arising during this reduction",
"venue": "NauchnoTechnicheskaya Informatsia,",
"year": 1968
},
{
"authors": [
"Man Wu",
"Shirui Pan",
"Chuan Zhou",
"Xiaojun Chang",
"Xingquan Zhu"
],
"title": "Unsupervised domain adaptive graph convolutional networks",
"venue": "In WWW,",
"year": 2020
},
{
"authors": [
"Keyulu Xu",
"Weihua Hu",
"Jure Leskovec",
"Stefanie Jegelka"
],
"title": "How powerful are graph neural networks",
"venue": "In ICLR,",
"year": 2019
},
{
"authors": [
"Bishan Yang",
"Wentau Yih",
"Xiaodong He",
"Jianfeng Gao",
"Li Deng"
],
"title": "Embedding entities and relations for learning and inference in knowledge bases",
"venue": "arXiv preprint arXiv:1412.6575,",
"year": 2014
},
{
"authors": [
"Carl Yang",
"Yichen Feng",
"Pan Li",
"Yu Shi",
"Jiawei Han"
],
"title": "Metagraph based hin spectral embedding: Methods, analyses, and insights",
"venue": null,
"year": 2018
},
{
"authors": [
"Carl Yang",
"Aditya Pal",
"Andrew Zhai",
"Nikil Pancha",
"Jiawei Han",
"Chuck Rosenberg",
"Jure Leskovec"
],
"title": "Multisage: Empowering graphsage with contextualized multiembedding on webscale multipartite networks",
"venue": "In KDD,",
"year": 2020
},
{
"authors": [
"Carl Yang",
"Yuxin Xiao",
"Yu Zhang",
"Yizhou Sun",
"Jiawei Han"
],
"title": "Heterogeneous network representation learning: A unified framework with survey and benchmark",
"venue": "In TKDE,",
"year": 2020
},
{
"authors": [
"Carl Yang",
"Chao Zhang",
"Xuewen Chen",
"Jieping Ye",
"Jiawei Han"
],
"title": "Did you enjoy the ride? understanding passenger experience via heterogeneous network embedding",
"venue": null,
"year": 2018
},
{
"authors": [
"Carl Yang",
"Jieyu Zhang",
"Jiawei Han"
],
"title": "Coembedding network nodes and hierarchical labels with taxonomy based generative adversarial nets",
"venue": "In ICDM,",
"year": 2020
},
{
"authors": [
"Carl Yang",
"Jieyu Zhang",
"Haonan Wang",
"Sha Li",
"Myungwan Kim",
"Matt Walker",
"Yiou Xiao",
"Jiawei Han"
],
"title": "Relation learning on social networks with multimodal graph edge variational autoencoders",
"venue": "In WSDM,",
"year": 2020
},
{
"authors": [
"Carl Yang",
"Peiye Zhuang",
"Wenhan Shi",
"Alan Luu",
"Pan Li"
],
"title": "Conditional structure generation through graph variational generative adversarial nets",
"venue": "In NIPS,",
"year": 2019
},
{
"authors": [
"Zhitao Ying",
"Jiaxuan You",
"Christopher Morris",
"Xiang Ren",
"Will Hamilton",
"Jure Leskovec"
],
"title": "Hierarchical graph representation learning with differentiable pooling",
"venue": null,
"year": 2018
},
{
"authors": [
"Jiaxuan You",
"Rex Ying",
"Xiang Ren",
"William Hamilton",
"Jure Leskovec"
],
"title": "GraphRNN: Generating realistic graphs with deep autoregressive models",
"venue": "In Proceedings of the 35th International Conference on Machine Learning,",
"year": 2018
}
]
 [
{
"heading": "1 Introduction",
"text": "Graph neural networks (GNNs) have been intensively studied recently [29, 26, 39, 68], due to their established performance towards various realworld tasks [15, 69, 53], as well as close connections to spectral graph theory [12, 9, 16]. While most GNN architectures are not very complicated, the training of GNNs can still be costly regarding both memory and computation resources on realworld largescale graphs [10, 63]. Moreover, it is intriguing to transfer learned structural information across different graphs and even domains in settings like fewshot learning [56, 44, 25]. Therefore, several very recent studies have been conducted on the transferability of GNNs [21, 23, 22, 59, 31, 3, 47]. However, it is unclear in what situations the models will excel or fail especially when the pretraining and finetuning tasks are different. To provide rigorous analysis and guarantee on the transferability of GNNs, we focus on the setting of directtransfering between the source and target graphs, under an analogous setting of “domain adaptation” [7, 59].\nIn this work, we establish a theoretically grounded framework for the transfer learning of GNNs, and leverage it to design a practically transferable GNN model. Figure 1 gives an overview of our framework. It is based on a novel view of a graph as samples from the joint distribution of its khop egograph structures and node features, which allows us to define graph information and similarity,\n∗These two authors contribute equally. 1Code and processed data are available at https://github.com/GentleZhu/EGI.\n35th Conference on Neural Information Processing Systems (NeurIPS 2021), Online.\nso as to analyze GNN transferability (§3). This view motivates us to design EGI, a novel GNN training objective based on egograph information maximization, which is effective in capturing the graph information as we define (§3.1). 
Then we further specify the requirement on transferable node features and analyze the transferability of EGI, which depends on the local graph Laplacians of the source and target graphs (§3.2).\nAll of our theoretical conclusions have been directly validated through controlled synthetic experiments (Table 1), where we use structurally-equivalent role identification in a direct-transferring setting to analyze the impacts of different model designs, node features and source-target structure similarities on GNN transferability. In §4, we conduct real-world experiments on multiple publicly available network datasets. On the Airport and Gene graphs (§4.1), we closely follow the settings of our synthetic experiments and observe consistent but more detailed results supporting the design of EGI and the utility of our theoretical analysis. On the YAGO graphs (§4.2), we further evaluate EGI in the more generalized and practical setting of transfer learning with task-specific fine-tuning. We find our theoretical insights still indicative in such scenarios, where EGI consistently outperforms state-of-the-art GNN representation and transfer learning frameworks by significant margins."
},
{
"heading": "2 Related Work",
"text": "Representation learning on graphs has been studied for decades, with earlier spectralbased methods [6, 46, 52] theoretically grounded but hardly scaling up to graphs with over a thousand of nodes. With the emergence of neural networks, unsupervised network embedding methods based on the Skipgram objective [37] have replenished the field [51, 14, 42, 45, 66, 62, 65]. Equipped with efficient structural sampling (random walk, neighborhood, etc.) and negative sampling schemes, these methods are easily parallelizable and scalable to graphs with thousands to millions of nodes. However, these models are essentially transductive as they compute fully parameterized embeddings only for nodes seen during training, which are impossible to be transfered to unseen graphs.\nMore recently, researchers introduce the family of graph neural networks (GNNs) that are capable of inductive learning and generalizing to unseen nodes given meaningful node features [29, 12, 15, 67]. Yet, most existing GNNs require taskspecific labels for training in a semisupervised fashion to achieve satisfactory performance [29, 15, 53, 64], and their usage is limited to single graphs where the downstream task is fixed. To this end, several unsupervised GNNs are presented, such as the autoencoderbased ones like VGAE [28] and GNFs [35], as well as the deepinfomaxbased ones like DGI [54] and InfoGraph [50]. Their potential in the transfer learning of GNN remains unclear when the node features and link structures vary across different graphs.\nAlthough the architectures of popular GNNs such as GCN [29] may not be very complicated compared with heavy vision and language models, training a dedicated GNN for each graph can still\nbe cumbersome [10, 63]. 
Moreover, as pre-training neural networks has proven successful in other domains [13, 18], it is intriguing to transfer well-trained GNNs from relevant source graphs to improve the modeling of target graphs or enable few-shot learning [59, 31, 3] when labeled data are scarce. In light of this, pioneering works have studied both generative [22] and discriminative [21, 23] GNN pre-training schemes. Though Graph Contrastive Coding [43] shares the most similar view towards graph structures as us, it utilizes contrastive learning across all graphs instead of focusing on the transfer learning between any specific pairs. On the other hand, unsupervised domain adaptive GCNs [59] study the domain adaptation problem only when the source and target tasks are homogeneous.\nMost previous pre-training and self-supervised GNNs lack a rigorous analysis of their transferability and thus have unpredictable effectiveness. The only existing theoretical works on GNN transferability study the performance of GNNs across different permutations of a single original graph [33, 34] and the trade-off between discriminability and transferability of GNNs [47]. We, instead, are the first to rigorously study the more practical setting of transferring GNNs across pairs of different source and target graphs."
},
{
"heading": "3 Transferable Graph Neural Networks",
"text": "In this paper, we design a more transferable training objective for GNN (EGI) based on our novel view of essential graph information (§3.1). We then analyze its transferability as the gap between its abilities to model the source and target graphs, based on their local graph Laplacians (§3.2).\nBased on the connection between GNN and spectral graph theory [29], we describe the output of a GNN as a combination of its input node features X , fixed graph Laplacian L and learnable graph filters Ψ. The goal of training a GNN is then to improve its utility by learning the graph filters that are compatible with the other two components towards specific tasks.\nIn the graph transfer learning setting where downstream tasks are often unknown during pretraining, we argue that the general utility of a GNN should be optimized and quantified w.r.t. its ability of capturing the essential graph information in terms of the joint distribution of its topology structures and node features, which motivates us to design a novel egograph information maximization model (EGI) (§3.1). The general transferability of a GNN is then quantified by the gap between its abilities to model the source and target graphs. Under reasonable requirements such as using structurerespecting node features as the GNN input, we analyze this gap for EGI based on the structural difference between two graphs w.r.t. their local graph Laplacians (§3.2)."
},
{
"heading": "3.1 Transferable GNN via Egograph Information Maximization",
"text": "In this work, we focus on the directtransfering setting where a GNN is pretrained on a source graph Ga in an unsupervised fashion and applied on a target graph Gb without finetuning.2 Consider a graph G = {V,E}, where the set of nodes V are associated with certain features X and the set of edges E form graph structures. Intuitively, the transfer learning will be successful only if both the features and structures of Ga and Gb are similar in some ways, so that the graph filters of a GNN learned on Ga are compatible with the features and structures of Gb.\nGraph kernels [57, 8, 30, 38] are wellknown for their capability of measuring similarity between pair of graphs. Motivated by khop subgraph kernels [4], we introduce a novel view of a graph as samples from the joint distribution of its khop egograph structures and node features. Since GNN essentially encodes such khop ego graph samples, this view allows us to give concrete definitions towards structural information of graphs in the transfer learning setting, which facilitates the measuring of similarity (difference) among graphs. Yet, none of the existing GNN training objectives are capable of recovering such distributional signals of ego graphs. To this end, we design EgoGraph Information maximization (EGI), which alternatively reconstructs the khop egograph of each center node via mutual information maximization [20].\nDefinition 3.1 (Khop egograph). We call a graph gi = {V (gi), E(gi)} a khop egograph centered at node vi if it has a klayer centroid expansion [4] such that the greatest distance between vi and\n2In the experiments, we show our model to be generalizable to the more practical settings with taskspecific pretraining and finetuning, while the study of rigorous bound in such scenarios is left as future work.\nany other nodes in the egograph is k, i.e. 
$\forall v_j \in V(g_i),\ d(v_i, v_j) \le k$, where $d(v_i, v_j)$ is the graph distance between $v_i$ and $v_j$.\nIn this paper, we use the directed k-hop ego-graph, whose direction is decided by whether it is composed of incoming or outgoing edges to the center node, i.e., $g_i$ and $\tilde{g}_i$. The results apply trivially to undirected graphs with $g_i = \tilde{g}_i$.\nDefinition 3.2 (Structural information). Let $\mathcal{G}$ be a topological space of subgraphs. We view a graph G as samples of k-hop ego-graphs $\{g_i\}_{i=1}^n$ drawn i.i.d. from $\mathcal{G}$ with probability $\mu$, i.e., $g_i \overset{\mathrm{i.i.d.}}{\sim} \mu\ \forall i = 1, \cdots, n$. The structural information of G is then defined to be the set of k-hop ego-graphs $\{g_i\}_{i=1}^n$ and their empirical distribution.\nAs shown in Figure 1, three graphs G0, G1 and G2 are characterized by a set of 1-hop ego-graphs and their empirical distributions, which allows us to quantify the structural similarity among graphs as shown in §3.2 (i.e., G0 is more similar to G1 than to G2 under such characterization). In practice, the nodes in a graph G are characterized not only by their k-hop ego-graph structures but also by their associated node features. Therefore, G should be regarded as samples $\{(g_i, x_i)\}$ drawn from the joint distribution $\mathbb{P}$ on the product space of $\mathcal{G}$ and a node feature space $\mathcal{X}$.\nEgo-Graph Information Maximization. Given a set of ego-graphs $\{(g_i, x_i)\}_i$ drawn from an empirical joint distribution $(g_i, x_i) \sim \mathbb{P}$, we aim to train a GNN encoder Ψ to maximize the mutual information $\mathrm{MI}(g_i, \Psi(g_i, x_i))$ between the defined structural information gi3 (i.e., the k-hop ego-graph) and the node embedding $z_i = \Psi(g_i, x_i)$. To maximize the MI, a discriminator $\mathcal{D}(g_i, z_i): E(g_i) \times z_i \to \mathbb{R}^+$ is introduced to compute the probability that an edge e belongs to the given ego-graph $g_i$. We use the Jensen-Shannon MI estimator [20] in the EGI objective,\n$$\mathcal{L}_{\mathrm{EGI}} = -\mathrm{MI}^{(\mathrm{JSD})}(\mathcal{G}, \Psi) = \frac{1}{N} \sum_{i=1}^{N} \left[ sp\left(\mathcal{D}(g_i, z'_i)\right) + sp\left(-\mathcal{D}(g_i, z_i)\right) \right], \qquad (1)$$\nwhere $sp(x) = \log(1 + e^x)$ is the softplus function and $(g_i, z'_i)$ is randomly drawn from the product of marginal distributions, i.e. 
$z'_i = \Psi(g_{i'}, x_{i'})$, $(g_{i'}, x_{i'}) \sim \mathbb{P}$, $i' \neq i$. In general, we can also randomly draw negative $g'_i$ from the topological space, but enumerating all possible graphs $g_{i'}$ leads to high computation cost.\nIn Eq. 1, the computation of $\mathcal{D}$ on $E(g_i)$ depends on the node order. Following the common practice in graph generation [70], we characterize the decision process of $\mathcal{D}$ with a fixed graph ordering, i.e., the BFS-ordering $\pi$ over the edges $E(g_i)$. $\mathcal{D} = f \circ \Phi$ is composed of another GNN encoder $\Phi$ and a scoring function $f$ over an edge sequence $E_\pi: \{e_1, e_2, ..., e_n\}$, which makes predictions on the BFS-ordered edges.\n3Later in Section 3.2, we will discuss the equivalence between $\mathrm{MI}(g_i, z_i)$ and $\mathrm{MI}((g_i, x_i), z_i)$ when the node features are structure-respecting.\nRecall our previous definition of the direction of the k-hop ego-graph: the center node encoder $\Psi$ receives pairs $(g_i, x_i)$, while the neighbor node encoder $\Phi$ in the discriminator $\mathcal{D}$ receives $(\tilde{g}_i, x_i)$. Both encoders are parameterized as GNNs,\n$$\Psi(g_i, x_i) = \mathrm{GNN}_\Psi(A_i, X_i), \quad \Phi(\tilde{g}_i, x_i) = \mathrm{GNN}_\Phi(A'_i, X_i),$$\nwhere $A_i$ and $A'_i$ are the adjacency matrices with self-loops of $g_i$ and $\tilde{g}_i$, respectively. The self-loops are added following the common design of GNNs, which allows the convolutional node embeddings to always incorporate the influence of the center node. $A_i = {A'_i}^{\mathsf{T}}$. The output of $\Psi$, i.e., $z_i \in \mathbb{R}^n$, is the center node embedding, while $\Phi$ outputs the representation $H \in \mathbb{R}^{|g_i| \times n}$ for the neighbor nodes in the ego-graph.\nOnce the node representation $H$ is computed, we now describe the scoring function $f$. For each node pair $(p, q) \in E_\pi$, $h_p$ is the source node representation from $\Phi$ and $x_q$ is the destination node's features. The scoring function is\n$$f(h_p, x_q, z_i) = \sigma\left(U^{\mathsf{T}} \cdot \tau\left(W^{\mathsf{T}} [h_p \| x_q \| z_i]\right)\right), \qquad (2)$$\nwhere $\sigma$ and $\tau$ are the Sigmoid and ReLU activation functions. Thus, the discriminator $\mathcal{D}$ is asked to distinguish a positive pair $((p, q), z_i)$ from a negative pair $((p, q), z'_i)$ for each edge in $g_i$:\n$$\mathcal{D}(g_i, z_i) = \sum_{(p,q) \in E_\pi} \log f(h_p, x_q, z_i), \quad \mathcal{D}(g_i, z'_i) = \sum_{(p,q) \in E_\pi} \log f(h_p, x_q, z'_i). 
\qquad (3)$$\nThere are two types of edges (p, q) in our consideration of node orders: type-a, the edges across different hops (from the center node), and type-b, the edges within the same hop (from the center node). The aforementioned BFS-based node ordering guarantees that Eq. 3 is sensitive to the ordering of type-a edges and invariant to the ordering of type-b edges, which is consistent with the requirement of our theoretical analysis on $\Delta_{\mathcal{D}}$. Due to the fact that the output of a k-layer GNN only depends on a k-hop ego-graph for both encoders Ψ and Φ, EGI can be trained in parallel by sampling batches of $g_i$'s. Besides, the training objective of EGI is transferable as long as $(g_i, x_i)$ across the source graph Ga and target graph Gb satisfy the conditions given in §3.2. More model details are in Appendix §B, and source code is in the Supplementary Materials.\nConnection with existing work. To provide more insights into the EGI objective, we also present it as a dual problem of ego-graph reconstruction. Recall our definition of ego-graph mutual information $\mathrm{MI}(g_i, \Psi(g_i, x_i))$. It can be related to an ego-graph reconstruction loss $\mathcal{R}(g_i \mid \Psi(g_i, x_i))$ as\n$$\max \mathrm{MI}(g_i, \Psi(g_i, x_i)) = H(g_i) - H(g_i \mid \Psi(g_i, x_i)) \le H(g_i) - \mathcal{R}(g_i \mid \Psi(g_i, x_i)). \qquad (4)$$\nWhen EGI is maximizing the mutual information, it simultaneously minimizes the upper error bound on reconstructing an ego-graph $g_i$. In this view, the key difference between EGI and VGAE [28] is that the latter assumes each edge in a graph to be observed independently during the reconstruction, while in EGI, the edges in an ego-graph are observed jointly during the GNN decoding. Moreover, existing mutual information based GNNs such as DGI [54] and GMI [41] explicitly measure the mutual information between the node features x and the GNN output Ψ. In this way, they tend to capture the node features instead of the graph structures, which we deem more essential in graph transfer learning, as discussed in §3.2.\nUse cases of the EGI framework. 
In this paper, we focus on the classical domain adaptation (direct-transferring) setting [7], where no target domain labels are available and transferability is measured by the performance discrepancy without fine-tuning. In this setting, the transferability of EGI is theoretically guaranteed by Theorem 3.1. In §4.1, we validate this with the airport datasets. Beyond direct-transferring, EGI is also useful in the more generalized and practical setting of transfer learning with fine-tuning, which we introduce in §4.2 and validate with the YAGO datasets. In this setting, the transferability of EGI has not been rigorously studied yet, but is empirically shown to be promising.\nSupportive observations. In the first three columns of our synthetic experimental results (Table 1), in both cases of transferring GNNs between similar graphs (F-F) and dissimilar graphs (B-F), EGI significantly outperforms all competitors when using node degree one-hot encoding as transferable node features. In particular, the performance gains over the untrained GIN show the effectiveness of training and transferring, and our gains are always larger than those of the two state-of-the-art unsupervised GNNs. Such results clearly indicate the advantageous structure preserving capability and transferability of EGI."
},
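To make the JSD-based objective in Eq. 1 concrete, here is a minimal NumPy sketch of the softplus estimator with a dot-product score standing in for the discriminator D = f ∘ Φ; the encoder outputs, score function, and dimensions are illustrative assumptions for exposition, not the authors' implementation.

```python
import numpy as np

def softplus(x):
    # sp(x) = log(1 + e^x), computed stably via logaddexp
    return np.logaddexp(0.0, x)

def egi_loss(scores_pos, scores_neg):
    """EGI objective from Eq. 1: the mean over ego-graphs of
    sp(D(g_i, z'_i)) + sp(-D(g_i, z_i)).
    scores_pos[i] = D(g_i, z_i)   (embedding paired with its own ego-graph)
    scores_neg[i] = D(g_i, z'_i)  (embedding drawn from another ego-graph)
    """
    return np.mean(softplus(scores_neg) + softplus(-scores_pos))

# Toy stand-in: discriminator scores as dot products between hypothetical
# ego-graph summaries and center-node embeddings.
rng = np.random.default_rng(0)
N, d = 8, 16
ego_summaries = rng.normal(size=(N, d))            # one summary per ego-graph g_i
z = ego_summaries + 0.1 * rng.normal(size=(N, d))  # z_i correlated with its own g_i
z_neg = np.roll(z, 1, axis=0)                      # z'_i: embedding of another node

scores_pos = np.sum(ego_summaries * z, axis=1)
scores_neg = np.sum(ego_summaries * z_neg, axis=1)
loss = egi_loss(scores_pos, scores_neg)
print(loss)  # lower when positives score high and negatives score low
```

Minimizing this loss pushes D high on matched (ego-graph, embedding) pairs and low on mismatched ones, which is exactly what makes the learned embeddings recover the ego-graph distribution.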
{
"heading": "3.2 Transferability analysis based on local graph Laplacians",
"text": "We now study the transferability of a GNN (in particular, with the training objective of LEGI) between the source graph Ga and target graph Gb based on their graph similarity. We firstly establish the requirement towards node features, under which we then focus on analyzing the transferability of EGI w.r.t. the structural information of Ga and Gb.\nRecall our view of the GNN output as a combination of its input node features, fixed graph Laplacian and learnable graph filters. The utility of a GNN is determined by the compatibility among the three. In order to fulfill such compatibility, we require the node features to be structurerespecting: Definition 3.3 (Structurerespecting node features). Let gi be an ordered egograph centered on node vi with a set of node features {xip,q} k,Vp(gi) p=0,q=1 , where Vp(gi) is the set of nodes in pth hop of gi. Then we say the node features on gi are structurerespecting if xip,q = [f(gi)]p,q ∈ Rd for any node vq ∈ Vp(gi), where f : G → Rd×V (gi) is a function. In the strict case, f should be injective.\nIn its essence, Def 3.3 requires the node features to be a function of the graph structures, which is sensitive to changes in the graph structures, and in an ideal case, injective to the graph structures (i.e., mapping different graphs to different features). In this way, when the learned graph filters of a transfered GNN is compatible to the structure of G, they are also compatible to the node features of G. As we will explain in Remark 2 of Theorem 3.1, this requirement is also essential for the analysis of EGI transferability which eventually only depends on the structural difference between two graphs.\nIn practice, commonly used node features like node degrees, PageRank scores [40], spectral embeddings [11], and many precomputed unsupervised network embeddings [42, 51, 14] are all structurerespecting in nature. 
However, other commonly used node features like random vectors [68] or uniform vectors [60] are not, and are thus non-transferable. When raw node attributes are available, they are transferable as long as the concept of homophily [36] applies, which also implies Def 3.3, but we do not have a rigorous analysis of this yet.\nSupportive observations. In the fifth and sixth columns of Table 1, where we use the same fixed vectors as non-transferable node features to contrast with the first three columns, there is almost no transferability (see δ(acc.)) for all compared methods when non-transferable features are used, as the performance of the trained GNNs is similar to or worse than that of their untrained baselines. More detailed experiments on different transferable and non-transferable features can be found in Appendix §C.1.\nWith our view of graphs and requirement on node features both established, we now derive the following theorem by characterizing the performance difference of EGI on two graphs based on Eq. 1. Theorem 3.1 (GNN transferability). Let $G_a = \{(g_i, x_i)\}_{i=1}^n$ and $G_b = \{(g_{i'}, x_{i'})\}_{i'=1}^m$ be two graphs, and assume the node features are structure-relevant. Consider a GCN $\Psi_\theta$ with k layers and a 1-hop polynomial filter $\phi$. With reasonable assumptions on the local spectra of Ga and Gb, the empirical performance difference of $\Psi_\theta$ evaluated on $\mathcal{L}_{\mathrm{EGI}}$ satisfies\n$$\mathcal{L}_{\mathrm{EGI}}(G_a) - \mathcal{L}_{\mathrm{EGI}}(G_b) \le O\left(\Delta_{\mathcal{D}}(G_a, G_b) + C\right). \qquad (5)$$\nOn the RHS, C is only dependent on the graph encoders and node features, while $\Delta_{\mathcal{D}}(G_a, G_b)$ measures the structural difference between the source and target graphs as follows,\n$$\Delta_{\mathcal{D}}(G_a, G_b) = \tilde{C}\, \frac{1}{nm} \sum_{i=1}^{n} \sum_{i'=1}^{m} \lambda_{\max}(\tilde{L}_{g_i} - \tilde{L}_{g_{i'}}) \qquad (6)$$\nwhere $\lambda_{\max}(A) := \lambda_{\max}(A^{\mathsf{T}}A)^{1/2}$, and $\tilde{L}_{g_i}$ denotes the graph Laplacian of $\tilde{g}_i$ normalized by its in-degree. $\tilde{C}$ is a constant dependent on $\lambda_{\max}(\tilde{L}_{g_i})$ and $\mathcal{D}$.\nProof. The full proof is detailed in Appendix §A.\nThe analysis in Theorem 3.1 naturally instantiates our insight about the correspondence between structural similarity and GNN transferability. 
It allows us to tell how well an EGI model trained on Ga can work on Gb by only checking the local graph Laplacians of Ga and Gb, without actually training any model. In particular, we define the EGI gap as $\Delta_{\mathcal{D}}$ in Eq. 6, as the other term C is the same for different methods using the same GNN encoder. It can be computed to bound the transferability of EGI in terms of its loss difference on the source and target graphs.\nRemark 1. Our view of a graph G as samples of k-hop ego-graphs is important, as it allows us to obtain a node-wise characterization of the GNN similarly as in [55]. It also allows us to set the depth of the ego-graphs in the analysis to be the same as the number of GNN layers (k), since the GNN embedding of each node mostly depends on its k-hop ego-graph instead of the whole graph. Remark 2. For Eq. 1, Def 3.3 ensures that sampling the GNN embedding at a node always corresponds to sampling an ego-graph from $\mathcal{G}$, which reduces to uniformly sampling from $\mathcal{G} = \{g_i\}_{i=1}^n$ under the setting of Theorem 3.1. Therefore, the requirement of Def 3.3 in the context of Theorem 3.1 guarantees that the analysis depends only on the structural information of the graph.\nSupportive observations. In Table 1, in the d̄ columns, we compute the average structural difference between two Forest-fire graphs (∆D(F, F)) and between Barabasi and Forest-fire graphs (∆D(B, F)), based on the RHS of Eq. 5. The results validate the topological difference between graphs generated by different random-graph models, while also verifying our view of a graph as k-hop ego-graph samples and the way we propose based on it to characterize the structural information of graphs. We further highlight in the δ(acc.) columns the accuracy difference between the GNNs transferred from Forest-fire graphs and Barabasi graphs to Forest-fire graphs. Since Forest-fire graphs are more similar to Forest-fire graphs than Barabasi graphs (as verified in the ∆D columns), we expect δ(acc.) 
to be positive and large, indicating more positive transfer between the more similar graphs. Indeed, the behaviors of EGI align well with this expectation, which indicates its well-understood transferability and the utility of our theoretical analysis.\nUse cases of Theorem 3.1. Our Theorem 3.1 naturally allows for two practical use cases among many others: point-wise pre-judgment and pair-wise pre-selection for EGI pre-training. Suppose we have a target graph Gb which does not have sufficient training labels. In the first setting, we have a single source graph Ga which might be useful for pre-training a GNN to be used on Gb. The EGI gap ∆D(Ga, Gb) in Eq. 6 can then be computed between Ga and Gb to pre-judge whether such transfer learning would be successful before any actual GNN training (i.e., yes if ∆D(Ga, Gb) is empirically much smaller than 1.0; no otherwise). In the second setting, we have two or more source graphs $\{G_a^1, G_a^2, \ldots\}$ which might be useful for pre-training the GNN. The EGI gap can then be computed between every pair of $G_a^i$ and Gb to pre-select the best source graph (i.e., select the one with the least EGI gap).\nIn practice, the computation of eigenvalues on the small ego-graphs can be rather efficient [2], and we do not need to enumerate all pairs of ego-graphs on the two compared graphs, especially if the graphs are really large (e.g., with more than a thousand nodes). Instead, we can randomly sample pairs of ego-graphs from the two graphs, update the average difference on-the-fly, and stop when it converges. Suppose we need to sample M pairs of k-hop ego-graphs to compare two large graphs, and the average size of the ego-graphs is L; then the overall complexity of computing Eq. 5 is $O(ML^2)$, where M is often less than 1K and L less than 50. In Appendix §C.4, we report the approximated ∆D's w.r.t. 
different sampling frequencies, and they are indeed quite close to the actual value even with smaller sample frequencies, showing the feasible efficiency of computing ∆D through sampling.\nLimitations. EGI is designed to account for the structural difference captured by GNNs (i.e., k-hop ego-graphs). The effectiveness of EGI could be limited if the tasks on the target graphs depend on different structural signals. For example, as Eq. 6 computes the average pairwise distance between the graph Laplacians of local ego-graphs, ∆D is possibly less effective in explicitly capturing global graph properties such as the number of connected components (CCs). In some specific tasks (such as counting CCs or community detection) where such properties become the key factors, ∆D may fail to predict the transferability of GNNs."
},
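The sampling-based estimate of the EGI gap described above can be sketched in a few lines of NumPy. This is a simplified illustration under stated assumptions (undirected ego-graphs given as dense adjacency matrices, degree-normalized Laplacians, zero-padding to a common size so Laplacians of different orders can be subtracted); the constant C̃ of Eq. 6 is omitted, and the padding scheme and sample count are illustrative choices, not the authors' exact procedure.

```python
import numpy as np

def normalized_laplacian(A):
    """L~ = I - D^{-1/2} A D^{-1/2} for adjacency matrix A."""
    deg = A.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def pad(L, n):
    # Zero-pad a Laplacian to size n x n so two ego-graphs can be compared.
    P = np.zeros((n, n))
    P[: len(L), : len(L)] = L
    return P

def egi_gap(egographs_a, egographs_b, n_samples=200, seed=0):
    """Monte-Carlo estimate of (1/nm) * sum_{i,i'} lambda_max(L~_gi - L~_gi'),
    i.e. Eq. 6 without C~; lambda_max(A) is the largest singular value of A."""
    rng = np.random.default_rng(seed)
    La = [normalized_laplacian(A) for A in egographs_a]
    Lb = [normalized_laplacian(A) for A in egographs_b]
    n = max(len(L) for L in La + Lb)
    total = 0.0
    for _ in range(n_samples):
        Li = pad(La[rng.integers(len(La))], n)
        Lj = pad(Lb[rng.integers(len(Lb))], n)
        total += np.linalg.norm(Li - Lj, ord=2)  # spectral norm = lambda_max
    return total / n_samples

# Toy usage: a triangle vs. a 3-node path as 1-hop ego-graphs.
tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(egi_gap([tri], [tri]))   # identical ego-graphs -> 0.0
print(egi_gap([tri], [path]))  # structurally different -> positive gap
```

With ego-graph sets sampled from two graphs, a small estimated gap would suggest (per Theorem 3.1) a better chance of positive transfer, matching the pre-judgment and pre-selection use cases above.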
{
"heading": "4 Real Data Experiments",
"text": "Baselines. We compare the proposed model against existing selfsupervised GNNs and pretraining GNN algorithms. To exclude the impact of different GNN encoders Ψ on transferability, we always use the same encoder architecture for all compared methods (i.e., GIN [60] for directtransfering experiments, GCN [29] for transfering with finetuning).\nThe selfsupervised GNN baselines are GVAE [28], DGI [54] and two latest mutual information estimation methods GMI [41] and MVC [17]. As for pretraining GNN algorithms, MaskGNN\nand ContextPredGNN are two nodelevel pretraining models proposed in [21] Besides, Structural Pretrain [23] also conducts unsupervised nodelevel pretraining with structural features like node degrees and clustering coefficients.\nExperimental Settings. The main hyperparameter k is set 2 in EGI as a common practice. We use Adam [27] as optimizer and learning rate is 0.01. We provide the experimental result with varying k in the Appendix §C.4. All baselines are set with the default parameters. Our experiments were run on an AWS g4dn.2xlarge machine with 1 Nvidia T4 GPU. By default, we use node degree onehot encoding as the transferable feature across all different graphs. As stated before, other transferable features like spectral and other precomputed node embeddings are also applicable. We focus on the setting where the downstream tasks on target graphs are unspecified but assumed to be structurerelevant, and thus pretrain the GNNs on source graphs in an unsupervised fashion.4 In terms of evaluation, we design two realistic experimental settings: (1) Directtransfering on the more structurerelevant task of role identification without given node features to directly evaluate the utility and transferability of EGI. (2) Fewshot learning on relation prediction with taskspecific node features to evaluate the generalization ability of EGI."
},
{
"heading": "4.1 Directtransfering on role identification",
"text": "First, we use the role identification without node features in a directtransfering setting as a reliable proxy to evaluate transfer learning performance regarding different pretraining objectives. Role in a network is defined as nodes with similar structural behaviors, such as clique members, hub and bridge [19]. Across graphs in the same domain, we assume the definition of role to be consistent, and the task of role identification is highly structurerelevant, which can directly reflect the transferability of different methods and allows us to conduct the analysis according to Theorem 3.1. Upon convergence of pretraining each model on the source graphs, we directly apply them to the target graphs and further train a multilayer perceptron (MLP) upon their outputs. The GNN parameters are frozen during the MLP training. We refer to this strategy as directtransfering since there is no finetuning of the models after transfering to the target graphs.\nWe use two realworld network datasets with rolebased node labels: (1) Airport [45] contains three networks from different regions– Brazil, USA and Europe. Each node is an airport and each link is the flight between airports. The airports are assigned with external labels based on their level of popularity. (2) Gene [68] contains the gene interactions regarding 50 different cancers. Each gene has a binary label indicating whether it is a transcription factor. More details about the results and dataset can be found in Appendix C.2.\nThe experimental setup on the Airport dataset closely resembles that of our synthetic experiments in Table 1, but with real data and more detailed comparisons. We train all models (except for the untrained ones) on the Europe network, and test them on all three networks. The results are presented in Table 2. 
We notice that the node degree features themselves (with an MLP) show reasonable performance on all three networks, which is not surprising since the popularity-based airport role labels are highly relevant to node degrees. The untrained GIN encoder yields a significant margin over just the node features, as the GNN encoder incorporates structural information into the node representations.\n4The downstream tasks are unspecified because we aim to study the general transferability of GNNs that is not bounded to specific tasks. Nevertheless, we assume the tasks to be relevant to graph structures.\nWhile training of DGI can further improve the performance on the source graph, EGI shows the best performance there with the structure-relevant node degree features, corroborating the claimed effectiveness of EGI in capturing the essential graph information (i.e., recovering the k-hop ego-graph distributions) as we stress in §3.\nWhen transferring the models to the USA and Brazil networks, EGI further achieves the best performance compared with all baselines when structure-relevant features are used (64.55 and 73.15), which reflects the most significant positive transfer. Interestingly, direct application of GVAE, DGI and MVC, which do not capture the input k-hop graph jointly, leads to rather limited and even negative transferability (through comparison against the untrained GIN encoders). The recently proposed transfer learning frameworks for GNNs like Mask-GNN and Structural Pre-train are able to mitigate negative transfer to some extent, but their performance is still inferior to EGI. We believe this is because their models are prone to learning graph-specific information that is less transferable across different graphs. GMI is also known to capture the graph structure and node features, so it achieves the second best result after EGI.\nSimilarly as in Table 1, we also compute the structural differences among the three networks w.r.t. the EGI gap in Eq. 6. 
The structural difference is 0.869 between the Europe and USA networks, and 0.851 between the Europe and Brazil networks, which are quite close. Consequently, the transferability of EGI regarding its performance gain over the untrained GIN baseline is 4.8% on the USA network and 4.4% on the Brazil network, which are also close. Such observations again align well with our conclusion in Theorem 3.1 that the transferability of EGI is closely related to the structural differences between the source and target graphs.\nOn the Gene dataset, with more graphs available, we focus on EGI to further validate the utility of Eq. 5 in Theorem 3.1, regarding the connection between the EGI gap (Eq. 6) and the performance gap (micro-F1) of EGI on them. Due to severe label imbalance that removes the performance gaps, we only use the seven brain cancer networks that have a more consistent balance of labels. As shown in Figure 3, we train EGI on one graph and test it on the other graphs. The x-axis shows the EGI gap, and the y-axis shows the improvement in micro-F1 compared with an untrained GIN. The negative correlation between the two quantities is obvious. Specifically, when the structural difference is smaller than 1, positive transfer is observed (upper left area), as the performance of the transferred EGI is better than that of the untrained GIN, and when the structural difference becomes large (> 1), negative transfer is observed. We also notice a similar graph pattern, i.e., a single dense cluster, between the source graph and the positively transferred target graph G2."
},
{
"heading": "4.2 Few-shot learning on relation prediction",
"text": "Here we evaluate EGI in the more generalized and practical setting of few-shot learning on the less structure-relevant task of relation prediction, with task-specific node features and fine-tuning. The source graph contains a cleaned full dump of 579K entities from YAGO [49], and we investigate 20-shot relation prediction on a target graph with 24 relation types, which is a subgraph of 115K entities sampled from the same dump. In post-fine-tuning, the models are pre-trained with an unsupervised loss on the source graph and fine-tuned with the task-specific loss on the target graph. In joint-fine-tuning, the same pre-trained models are jointly optimized w.r.t. the unsupervised pre-training loss and the task-specific fine-tuning loss on the target graph. In Table 3, we observe that most of the existing models fail to transfer across pre-training and fine-tuning tasks, especially in the joint-fine-tuning setting. In particular, both MaskGIN and ContextPredGIN rely heavily on task-specific fine-tuning, while EGI focuses on capturing similar ego-graph structures that are transferable across graphs. The mutual information based method GMI also demonstrates considerable transferability, and we believe the ability to capture the graph structure is the key to this transferability. Overall, EGI significantly outperforms all compared methods in both settings. More detailed statistics and running times are in Appendix §C.3."
},
{
"heading": "5 Conclusion",
"text": "To the best of our knowledge, this is the first research effort towards establishing a theoretically grounded framework to analyze GNN transferability, which we also demonstrate to be practically useful for guiding the design and conduct of transfer learning with GNNs. For future work, it is intriguing to further strengthen the bound with relaxed assumptions, rigorously extend it to more complicated and less restricted settings regarding node features and downstream tasks, and analyze and improve the proposed framework over more transfer learning scenarios and datasets. It is also important to protect the privacy of pre-training data to avoid potential negative societal impacts.\nAcknowledgments and Disclosure of Funding\nResearch was supported in part by US DARPA KAIROS Program No. FA87501921004, SocialSim Program No. W911NF17C0099, and INCAS Program No. HR001121C0165, National Science Foundation IIS1956151, IIS1741317, and IIS 1704532, and the Molecule Maker Lab Institute: An AI Research Institutes program supported by NSF under Award No. 2019897. Chao Zhang is supported by NSF IIS2008334, IIS2106961, and ONR MURI N000141712656. We would like to thank the AWS Machine Learning Research Awards program for providing computational resources for the experiments in this paper. This work is also partially supported by the internal funding and GPU servers provided by the Computer Science Department of Emory University. Any opinions, findings, and conclusions or recommendations expressed herein are those of the authors and do not necessarily represent the views, either expressed or implied, of DARPA or the U.S. Government."
}
]
 2022
 Transfer Learning of Graph Neural Networks with Ego-graph Information Maximization

SP:40cba7b6c04d7e44709baed351382c27fa89a129
 [
"The authors perform a descriptive analysis of data by attempting to identify elements in the partial ordering of all partitions on the data which admit a compact definition. Compact definitions are those that are formed by composition of a small number of predefined (prior) set of mathematical operations. Projection and lifting operations are defined to relate descriptions of partition cells to one another through rules. The quality of a description is measured by the divergence between the data and the (special) lifting of the rule set, under the constraint that rules satisfy an upper bound on their entropy."
]
 Information Lattice Learning (ILL) is a general framework to learn decomposed representations, called rules, of a signal such as an image or a probability distribution. Each rule is a coarsened signal used to gain some human-interpretable insight into what might govern the nature of the original signal. To summarize the signal, we need several disentangled rules arranged in a hierarchy, formalized by a lattice structure. ILL focuses on explainability and generalizability from “small data”, and aims for rules akin to those humans distill from experience (rather than a representation optimized for a specific task like classification). This paper focuses on a mathematical and algorithmic presentation of ILL, then demonstrates how ILL addresses the core question “what makes X an X” or “what makes X different from Y” to create effective, rule-based explanations designed to help human learners understand. The key part here is the what, rather than tasks like generating X or predicting labels X, Y. Typical applications of ILL are presented for artistic and scientific knowledge discovery. These use ILL to learn music theory from scores and chemical laws from molecule data, revealing relationships between the domains. We include initial benchmarks and assessments for ILL to demonstrate its efficacy.
 []
 [
{
"authors": [
"Amina Adadi",
"Mohammed Berrada"
],
"title": "Peeking inside the blackbox: A survey on explainable artificial intelligence (XAI)",
"venue": "IEEE Access,",
"year": 2018
},
{
"authors": [
"Yoshua Bengio",
"Aaron Courville",
"Pascal Vincent"
],
"title": "Representation learning: A review and new perspectives",
"venue": "IEEE Trans. Pattern Anal. Mach. Intell.,",
"year": 2013
},
{
"authors": [
"Karell Bertet",
"Michel Morvan"
],
"title": "Computing the sublattice of a lattice generated by a set of elements",
"venue": "In Proc. 3rd Int. Conf. Orders, Algorithms Appl.,",
"year": 1999
},
{
"authors": [
"Karell Bertet",
"Michel Morvan",
"Lhouari Nourine"
],
"title": "Lazy completion of a partial order to the smallest lattice",
"venue": "In Proc. 2nd Int. Symp. Knowl. Retr., Use and Storage for Effic. (KRUSE",
"year": 1997
},
{
"authors": [
"Christina Bodurow"
],
"title": "Music and chemistry—what’s the connection",
"venue": "Chem. Eng. News,",
"year": 2018
},
{
"authors": [
"Nathalie Caspard",
"Bruno Leclerc",
"Bernard Monjardet"
],
"title": "Finite Ordered Sets: Concepts, Results and Uses. Number 144 in Encyclopedia of Mathematics and its Applications",
"venue": null,
"year": 2012
},
{
"authors": [
"Gregory J Chaitin"
],
"title": "Algorithmic Information Theory",
"venue": null,
"year": 1987
},
{
"authors": [
"Nick Chater",
"Paul Vitányi"
],
"title": "Simplicity: A unifying principle in cognitive science",
"venue": "Trends Cogn. Sci.,",
"year": 2003
},
{
"authors": [
"François Chollet"
],
"title": "On the measure of intelligence",
"venue": "arXiv:1911.01547v2 [cs.AI],",
"year": 2019
},
{
"authors": [
"Erhan Çınlar"
],
"title": "Probability and Stochastics, volume 261",
"venue": "Springer Science & Business Media,",
"year": 2011
},
{
"authors": [
"Thomas M Cover",
"Joy A Thomas"
],
"title": "Elements of Information Theory",
"venue": null,
"year": 2012
},
{
"authors": [
"Constantinos Daskalakis",
"Richard M Karp",
"Elchanan Mossel",
"Samantha J Riesenfeld",
"Elad Verbin"
],
"title": "Sorting and selection in posets",
"venue": "SIAM J. Comput.,",
"year": 2011
},
{
"authors": [
"Brian A Davey",
"Hilary A Priestley"
],
"title": "Introduction to Lattices and Order",
"venue": null,
"year": 2002
},
{
"authors": [
"Benjamin Eva"
],
"title": "Principles of indifference",
"venue": "J. Philos.,",
"year": 2019
},
{
"authors": [
"Ruma Falk",
"Clifford Konold"
],
"title": "Making sense of randomness: Implicit encoding as a basis for judgment",
"venue": "Psychol. Rev.,",
"year": 1997
},
{
"authors": [
"Bernhard Ganter",
"Rudolf Wille"
],
"title": "Formal Concept Analysis: Mathematical Foundations",
"venue": "Springer Science & Business Media,",
"year": 2012
},
{
"authors": [
"Vijay K Garg"
],
"title": "Introduction to Lattice Theory with Computer Science Applications",
"venue": "Wiley Online Library,",
"year": 2015
},
{
"authors": [
"Lejaren Hiller",
"Leonard Maxwell Isaacson"
],
"title": "Illiac Suite, for String Quartet",
"venue": "New Music Edition,",
"year": 1957
},
{
"authors": [
"Steven Holtzen",
"Todd Millstein",
"Guy Van den Broeck"
],
"title": "Generating and sampling orbits for lifted probabilistic inference",
"venue": "arXiv:1903.04672v3 [cs.AI],",
"year": 2019
},
{
"authors": [
"Anubhav Jain",
"Shyue Ping Ong",
"Geoffroy Hautier",
"Wei Chen",
"William Davidson Richards",
"Stephen Dacek",
"Shreyas Cholia",
"Dan Gunter",
"David Skinner",
"Gerbrand Ceder",
"Kristin A. Persson"
],
"title": "The Materials Project: A materials genome approach to accelerating materials innovation",
"venue": "APL Materials,",
"year": 2013
},
{
"authors": [
"Michael I Jordan"
],
"title": "Artificial intelligence—the revolution hasn’t happened yet",
"venue": "Harvard Data Science Review,",
"year": 2019
},
{
"authors": [
"David Kaiser",
"Jonathan Moreno"
],
"title": "Self-censorship is not enough",
"venue": "Nature,",
"year": 2012
},
{
"authors": [
"Martin Kauer",
"Michal Krupka"
],
"title": "Subset-generated complete sublattices as concept lattices",
"venue": "In Proc. 12th Int. Conf. Concept Lattices Appl., pp",
"year": 2015
},
{
"authors": [
"Kristian Kersting"
],
"title": "Lifted probabilistic inference",
"venue": "In Proc. 20th European Conf. Artif. Intell. (ECAI",
"year": 2012
},
{
"authors": [
"Risi Kondor",
"Shubhendu Trivedi"
],
"title": "On the generalization of equivariance and convolution in neural networks to the action of compact groups",
"venue": "[stat.ML],",
"year": 2018
},
{
"authors": [
"Holbrook Mann MacNeille"
],
"title": "Partially ordered sets",
"venue": "Trans. Am. Math. Soc.,",
"year": 1937
},
{
"authors": [
"Gary Marcus"
],
"title": "Innateness, AlphaZero, and artificial intelligence",
"venue": "[cs.AI],",
"year": 2018
},
{
"authors": [
"Christoph Molnar"
],
"title": "Interpretable Machine Learning",
"venue": "Lulu.com,",
"year": 2019
},
{
"authors": [
"Andreas D Pape",
"Kenneth J Kurtz",
"Hiroki Sayama"
],
"title": "Complexity measures and concept learning",
"venue": "J. Math. Psychol.,",
"year": 2015
},
{
"authors": [
"Uta Priss"
],
"title": "Formal concept analysis in information science",
"venue": "Ann. Rev. Inform. Sci. Tech.,",
"year": 2006
},
{
"authors": [
"Anna Rogers",
"Olga Kovaleva",
"Anna Rumshisky"
],
"title": "A primer in BERTology: What we know about how BERT works",
"venue": "[cs.CL],",
"year": 2020
},
{
"authors": [
"Andrew D Selbst",
"Danah Boyd",
"Sorelle A Friedler",
"Suresh Venkatasubramanian",
"Janet Vertesi"
],
"title": "Fairness and abstraction in sociotechnical systems",
"venue": "In Proc. Conf. Fairness, Account., and Transpar.,",
"year": 2019
},
{
"authors": [
"Claude Shannon"
],
"title": "The lattice theory of information",
"venue": "Trans. IRE Prof. Group Inf. Theory,",
"year": 1953
},
{
"authors": [
"Charles Percy Snow"
],
"title": "The Two Cultures",
"venue": null,
"year": 1959
},
{
"authors": [
"Harini Suresh",
"John V Guttag"
],
"title": "A framework for understanding unintended consequences of machine learning",
"venue": "[cs.LG],",
"year": 2019
},
{
"authors": [
"Dmitri Tymoczko"
],
"title": "A Geometry of Music: Harmony and Counterpoint in the Extended Common Practice",
"venue": null,
"year": 2010
},
{
"authors": [
"Haizi Yu",
"Lav R. Varshney"
],
"title": "Towards deep interpretability (MUSROVER II): Learning hierarchical representations of tonal music",
"venue": "In Proc. 5th Int. Conf. Learn. Represent",
"year": 2017
},
{
"authors": [
"Haizi Yu",
"Lav R Varshney",
"Guy E Garnett",
"Ranjitha Kumar"
],
"title": "MUSROVER: A selflearning system for musical compositional rules",
"venue": "In Proc. 4th Int. Workshop Music. Metacreation (MUME",
"year": 2016
},
{
"authors": [
"Haizi Yu",
"Tianxi Li",
"Lav R Varshney"
],
"title": "Probabilistic rule realization and selection",
"venue": "In Proc. 31st Annu. Conf. Neural Inf. Process. Syst. (NeurIPS 2017),",
"year": 2017
}
]
 [
{
"heading": "1 INTRODUCTION",
"text": "With rapid progress in AI, there is an increasing desire for general AI (Goertzel & Pennachin, 2007; Chollet, 2019) and explainable AI (Adadi & Berrada, 2018; Molnar, 2019), which exhibit broad, human-like cognitive capacities. One common pursuit is to move away from “black boxes” designed for specific tasks towards broad generalization through strong abstractions made from only a few examples, with neither unlimited priors nor unlimited data (“primitive priors” & “small data” instead). In this pursuit, we present a new, task-nonspecific framework—Information Lattice Learning (ILL)—to learn representations akin to human-distilled rules, e.g., producing much of a standard music theory curriculum as well as new rules in a form directly interpretable by students (shown at the end).\nThe term information lattice was first defined by Shannon (1953), but it remains largely conceptual and unexplored. In the context of abstraction and representation learning, we independently develop representation lattices that coincide with Shannon’s information lattice when restricted to his context. Instead of inventing a new name, we adopt Shannon’s. However, we not only generalize the original definition—an information lattice here is a hierarchical distribution of representations—but also bring learning into the lattice, yielding the name ILL.\nILL explains a signal (e.g., a probability distribution) by disentangled representations, called rules. A rule explains some but not all aspects of the signal, but together the collection of rules aims to capture a large part of the signal. ILL is specially designed to address the core question “what makes X an X” or “what makes X different from Y”, emphasizing the what rather than generating X or predicting labels X, Y, in order to facilitate effective, rule-based explanations designed to help human learners understand. 
A music AI classifying concertos, or generating one that mimics the masters, does not necessarily produce human insight about what makes a concerto a concerto or the best rules a novice composer might employ to write one. Our focus represents a shift from much representation-learning work (Bengio et al., 2013) that aims to find the best representation for solving a specific task (e.g., classification) with less concern for explainability. Instead of optimizing a task-specific objective function (e.g., classification error), ILL balances more general objectives that favor fewer, simpler rules for interpretability and more essential rules for effectiveness—all formalized later.\nOne intuition behind ILL is to break the whole into simple pieces, similar to breaking a signal into a Fourier series. Yet, rather than decomposition via projection onto an orthonormal basis and synthesis via weighted sum, we decompose a signal in a hierarchical space called a lattice. Another intuition behind ILL is feature selection. Yet, rather than features, we use partitions to mimic human concepts and enable structured search in a partition lattice to mimic human learning. The goal is to restore human-like, hierarchical rule abstraction-and-realization through signal decomposition-and-synthesis in a lattice (called projection-and-lifting, Figure 1: left), resulting in more than a sum of parts.\nILL comprises two phases: (a) lattice construction; (b) learning (i.e., searching) in the lattice. This is similar to many machine learning (ML) models comprising (a) function class specification and then (b) learning in the function class, e.g., constructing a neural network and then learning—finding optimal parameters via backpropagation—in the network. ILL’s construction phase is prior-efficient: it builds in universal priors that resemble human innate cognition (cf. the Core Knowledge priors (Spelke & Kinzler, 2007)), then grows a lattice of abstractions. 
The priors can be customized, however, to cater to a particular human learner or to facilitate more exotic knowledge discovery. ILL’s learning phase is data-efficient: it learns from “small data” encoded by a signal, but searches for rich explanations of the signal via rule learning, wherein abstraction is key to “making small data large”. Notably, the construction phase is prior-driven, not data-driven—data comes in only at the learning phase. Hence, the same construction may be reused in different learning phases for different data sets, or even data on different topics (Figure 1: right). Featuring these two phases, ILL is thus a hybrid model that threads the needle between a fully data-driven model and a fully prior-driven model, echoing the notion of “starting like a baby; learning like a child” (Hutson, 2018).\nILL is related to many research areas. It draws ideas and approaches from lattice theory, information theory, group theory, and optimization. It shares algorithmic similarity with a range of techniques including MaxEnt, data compression, autoencoders, and compressed sensing, but with a much greater focus on achieving human-like explainability and generalizability. 
Below, we broadly compare ILL to prominent related models, leaving more comparisons to the Appendix for the most similar ones.\nCompared to deep learning, ILL is a “white-box” model balancing human-explainability and task performance.\nCompared to Bayesian inference, ILL models human reasoning with widely shared, common priors and few, simple rules, rather than using probabilistic inference as the driving force.\nCompared to tree-like models, ILL is structurally more general: a tree (e.g., a decision tree or hierarchical clustering) is essentially a linear lattice (formally called a chain) depicting a unidirectional refinement or coarsening process.\nCompared to the concept lattice in FCA (Ganter & Wille, 2012), ILL is conceptually more general and may include both known and unknown concepts; ILL does not require but discovers domain knowledge (more details in Appendix A).\nWe illustrate ILL applications by learning music theory from scores and chemical laws from compounds, and show how ILL’s common priors facilitate mutual interpretation between the two subjects. To begin, imagine Tom and Jerry are playing two 12-key pianos simultaneously, one note at a time (Figure 1: right). The frequency of the played two-note chords gives a 2D signal plotted as a 12 × 12 grayscale heatmap. Inspecting this heatmap, what might be the underlying rules that govern their co-play? (Check: all grey pixels have a larger “Jerry-coordinate” and project to a black key along the “Tom-axis”.) We now elaborate on ILL and use it to distill rules for complex, realistic cases."
},
{
"heading": "2 INFORMATION LATTICE: ABSTRACTIONS AND RULES OF A SIGNAL",
"text": "Signal. A signal is a function ξ : X → R. For notational brevity and computational reasons, assume ξ is nonnegative and X ⊆ R^n is finite (not a limitation: see Appendix B). For example, a signal ξ : {1, . . . , 6} → R may be a probability mass function (pmf) of a die roll, or a signal ξ : {0, . . . , 27}^2 → R a 28 × 28 grayscale image. We denote the set of all signals on X by SX.\nPartition / abstraction. We use a partition P of a set X to denote an abstraction of X; we call a cell C ∈ P an (abstracted) concept. The intuition is simple: a partition of a set renders a “coarse-grained view” of the set, or more precisely, an equivalence relation on the set. In this view, we identify equivalence classes of elements (concepts) instead of individual elements. For example, the partition P = {{1, 3, 5}, {2, 4, 6}} of the six outcomes of a die roll identifies two concepts (odd, even).\nRule / representation. A rule of a signal ξ : X → R is a “coarsened” signal rξ : P → R defined on a partition P of X with rξ(C) := Σ_{x∈C} ξ(x) for any C ∈ P. In this paper, a rule of a signal is what we mean by a representation of a signal. If the signal is a grayscale image, a rule can be a special type of blurring or downsampling of the image; if the signal is a probability distribution, a rule can be a pmf of the “orbits” of the distribution for lifted inference algorithms (Holtzen et al., 2019; Kersting, 2012). More generally, we define a rule (regardless of any signal) over a set X as any signal on any partition of X; accordingly, we denote the set of all rules over X by RX := ∪_{P ∈ {all partitions of X}} SP.\nPartition lattice. Abstractions are hierarchical: one coarse-grained view can be coarser than another. Let the partition lattice (PX, ⪯) of a set X be the partially ordered set (poset) containing all partitions of X, equipped with the partial order coarser than (⪯), or finer than (⪰), defined in the standard way. 
Let P⊥ := {{x} : x ∈ X} and P⊤ := {X} denote the finest and the coarsest partition, respectively. Per general lattice theory (Davey & Priestley, 2002), PX is a complete lattice: every subset P ⊆ PX has a unique supremum ∨P and a unique infimum ∧P, where ∨P is called the join of P, denoting its coarsest common refinement, and ∧P is called the meet of P, denoting its finest common coarsening.\nInformation lattice. The information lattice (Rξ, ⇐) of a signal ξ : X → R is the poset of all rules of ξ equipped with the partial order more general than: for any two rules r, r′ ∈ Rξ, we say r is more general than r′ (or r′ is more specific), denoted r ⇐ r′, if domain(r) ⪯ domain(r′). Notably, Rξ ⊆ RX, and Rξ is isomorphic to the underlying partition lattice via the projection defined below.\nProjection and lifting. For any signal ξ ∈ SX, we define the projection operator ↓ξ : PX → Rξ by letting ↓ξ(P) be the rule of ξ on P. One can check that ↓ξ : (PX, ⪯) → (Rξ, ⇐) is an isomorphism. Conversely, we define the general lifting operator ⇑X : RX → 2^SX by letting ⇑X(r) denote the set of all signals that satisfy the rule r, i.e., ⇑X(r) := {ξ ∈ SX : ↓ξ(domain(r)) = r} ⊆ SX. To make lifting unique, and per Principles of Indifference (Eva, 2019), we introduce a special lifting ↑X(r) that picks the most “uniform” signal in ⇑X(r). Formally, define ‖ · ‖q : SX → R by ‖ξ‖q := (Σ_{x∈X} ξ(x)^q)^{1/q}. For any ξ, ξ′ ∈ SX satisfying ‖ξ‖1 = ‖ξ′‖1, we say that ξ is more uniform than ξ′ (or ξ′ is more deterministic) if ‖ξ‖2 ≤ ‖ξ′‖2. We define the (special) lifting operator ↑X : RX → SX by ↑X(r) := argmin_{ξ∈⇑X(r)} ‖ξ‖2 (computable by simply averaging). The notation here follows the convention for function projections to quotient spaces (Kondor & Trivedi, 2018). Lifting a single rule to the signal domain can be extended in two ways: (a) lift to a finer rule domain P instead of X, i.e., ⇑P(r) or ↑P(r); (b) lift more than one rule. 
Accordingly, we write ⇑ := ⇑X and ↑ := ↑X as defaults, write R = ↓ξ(P) := {↓ξ(P) : P ∈ P} ⊆ Rξ to denote a rule set, and write ⇑(R) := ∩_{r∈R} ⇑(r) = {η ∈ SX : ↓η(P) = R} and ↑(R) := argmin_{η∈⇑(R)} ‖η‖2 to denote the set of signals that satisfy all rules in R (general lifting) and the most uniform one among them (special lifting), respectively. More computational details on lifting and its intimate relation to the join are in Appendix C."
},
{
"heading": "3 INFORMATION LATTICE LEARNING (ILL)",
"text": "We first formalize ILL as a single optimization problem and then solve it practically in two phases. Let ξ : X → R be a signal we want to explain. By explaining, we mean searching for a rule set R = ↓ξ(P) ⊆ Rξ such that: (a) R recovers ξ well, i.e., R is essential; (b) R is simple. The main idea agrees with Algorithmic Information Theory (Chaitin, 1987; Chater & Vitányi, 2003), but we use an information-lattice based formulation focusing on explainability. We start our formulation below.\nWe say a rule set R recovers the signal ξ exactly if ↑(R) = ξ. Yet, exact recovery may not always be achieved. The information loss occurs for two reasons: (a) insufficient abstractions, i.e., the join ∨P is strictly coarser than the finest (singleton) partition of X; (b) the choice made in favor of uniformity is inappropriate. Instead of pursuing exact recovery, we introduce ∆(↑(R), ξ)—a distance (e.g., ℓp distance) or a divergence (e.g., KL divergence)—to measure the loss, with a smaller ∆ indicating a more essential R.\nWe say a rule set R is simpler if it contains fewer and simpler rules. Formally, we want R minimal, i.e., each rule r ∈ R is indispensable for achieving the same ↑(R). Also, we want each rule r ∈ R informationally simple, measured by a smaller Shannon entropy Ent(r), so that r is more deterministic (Falk & Konold, 1997), easier to remember (Pape et al., 2015), and closer to our commonsense notion of a “rule”. Notably, the partial order renders a tradeoff between the two criteria: r ⇐ r′ implies that r is dispensable in any R ⊇ {r, r′}, but on the other hand Ent(r) ≤ Ent(r′), so including more-specific rules makes the rule set small yet each individual rule (informationally) hard.\nThe main problem. The formal definition of an ILL problem is: given a signal ξ : X → R,\nminimize_{R⊆Rξ} ∆(↑(R), ξ) subject to R is minimal; Ent(r) ≤ ε for any r ∈ R. (1)\nThe search space involves the full information lattice (Rξ, ⇐), or isomorphically, the full partition lattice (PX, ⪯). 
Yet, the size of this lattice, i.e., the Bell number B_|X|, scales faster than exponentially in |X|. It is unrealistic to compute all partitions of X (unless X is tiny), let alone the partial order. Besides computational concerns, there are two reasons to avoid the full lattice (while leaving it implicitly in the background): (a) the full lattice has unnecessarily high resolution, comprising many nearly-identical partitions, particularly when X is large; (b) considering explainability, not every partition has an easy-to-interpret criterion by which the abstraction is made. As such, Formulation (1) is only conceptual and impractical. Next, we relax it and make it practical via two ILL phases."
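The entropy constraint in Problem (1) is what later makes the search tractable: by the grouping axiom, coarsening a rule can only decrease its Shannon entropy, which the learning phase exploits to prune its bottom-up search. A small sketch (illustrative, not the paper's code) of this monotonicity:

```python
# Sketch: Shannon entropy of a (normalized) rule, the simplicity measure
# Ent(r) in Problem (1). Merging cells (coarsening) can only decrease
# entropy, per the grouping axiom (Cover & Thomas, 2012).
import math

def entropy(rule):
    """Entropy of a rule after normalizing its cell values to a pmf."""
    total = sum(rule)
    probs = [v / total for v in rule if v > 0]
    return -sum(p * math.log2(p) for p in probs)

fine = [1, 1, 1, 1]      # rule on a 4-cell partition
coarse = [2, 2]          # same mass after merging cells pairwise
assert entropy(coarse) <= entropy(fine)
print(entropy(fine), entropy(coarse))   # 2.0 1.0
```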
},
{
"heading": "3.1 PRACTICAL LATTICE CONSTRUCTION: TO START LIKE A BABY (PHASE I)",
"text": "Information lattice construction plays a role similar to building a function class in ML, sometimes called meta-learning. While its importance is commonly understood, the construction phase in many data-driven models is often treated cursorily—using basic templates and/or ad-hoc priors—leaving most of the computation to the learning phase. In contrast, we put substantial effort into our prior-driven construction phase. Pursuing generality and interpretability, we want universal, simple priors that are domain-agnostic and close to the innate cognition of a human baby (Marcus, 2018). Here we draw those from Core Knowledge (Spelke & Kinzler, 2007; Chollet, 2019), which include “the (small) natural numbers and elementary arithmetic prior” and “the elementary geometry and topology prior”. We then give algorithms to construct abstractions from these priors, and consider such a construction prior-efficient if it is interpretable, expressive, and systematic. In the following flowchart, we summarize information lattice construction as generating a partition sublattice.\n[Flowchart: seeds (priors) → features/symmetries → partition multiset → partition poset → partition semilattice → partition sublattice; stages 1–2 form the prior-driven stage, stage 3 the hierarchy stage, and stages 4–5 the completion stage.]\n1 2 Feature / symmetry-induced partitions. Unlike data clustering, our prior-driven partitions are induced from two data-independent sources—features and symmetries. We draw priors—in the form of seed features F and seed transformations S—from Core Knowledge as a basis, and then generate a set of partitions P〈F,S〉 as follows, using X = R^2 as an example:\nF = {w[1], w[2], w[1,2], sort, argsort, sum, diff, div2, . . . , div19, mod2, . . . , mod19} (2)\nS = {horizontal, vertical, diagonal translations} ∪ {rotations} ∪ {reflections} (3)\nΦ〈F〉: set of features generated by F via function composition;\nG〈S〉: set of subgroups generated by subsets of S via subgroup generation;\nPΦ〈F〉: set of partitions generated by features in Φ〈F〉 via preimages;\nPG〈S〉: set of partitions generated by subgroups in G〈S〉 via orbits.\nIn (2), wI denotes coordinate selection (like indexing/slicing in python) and the other functions are defined as in python (div and mod are like in python divmod). Then, P〈F,S〉 = PΦ〈F〉 ∪ PG〈S〉.\n3 Partition poset. We next sort P〈F,S〉, computationally a multiset, into the poset (P〈F,S〉, ⪯). We import the algorithmic skeleton from generic poset-sorting algorithms (Caspard et al., 2012; Daskalakis et al., 2011), with an outer routine incrementally adding elements and querying an inner subroutine (an oracle) for pairwise comparison. Yet, our poset is special: its elements are tagged partitions, where a tag records the generating source(s) of its partition, e.g., features and/or symmetries. So, we have specially designed both the outer routine ADD PARTITION and the oracle COMPARE by leveraging (a) transitivity (valid for all posets), (b) partition size (valid for partitions), and (c) partition tags (valid for tagged partitions) to predetermine or filter relations. We relegate details to Appendix E. The data structures for posets include po matrix and hasse diagram, encoding the partial order ≺ (ancestors/descendants) and the cover relation ≺c (parents/children), respectively (Garg, 2015).\n4 5 Partition semi-/sublattice. To complete (P〈F,S〉, ⪯) into a lattice, we compute the sublattice (of PX) generated by P〈F,S〉. We follow the idea of alternating join-and-meet completions borrowed from one of the two generic sublattice-completion methods (Bertet & Morvan, 1999). A discussion of our choice and other related methods is in Appendix D. 
However, we implement join-semilattice completion (meet-semilattice completion is dual) in our special context of tagged partitions, which echoes what we did in step 3 and reuses ADD PARTITION. The adjustments are (a) changing tags from features and symmetries to join formulae, and (b) changing the inner subroutine from pairwise comparison to computing joins. We then run a sequence of alternating joins and meets to complete the lattice. For interpretability, one may want to stop early in the completion sequence. While a single join or meet remains simple for human interpretation—often understood as the intersection or union of concepts (e.g., the join of colored items and sized items gives items indexed by color and size)—having alternating joins and meets may hinder comprehension. More details on single-step join-semilattice completion, the completion sequence, and tips on early stopping are relegated to Appendix E."
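The feature-to-partition step and the coarser-than test used by the poset-sorting oracle are both straightforward on a finite domain. A minimal sketch (the helper names `partition_by_feature` and `coarser_eq` are illustrative, not the paper's ADD PARTITION / COMPARE routines):

```python
# Sketch of stages 1-3: generating a partition of X from a feature via
# preimages, and the "coarser than or equal" test used when sorting
# partitions into a poset.
from collections import defaultdict

def partition_by_feature(X, feature):
    """Cells are feature preimages: x, y share a cell iff feature(x) == feature(y)."""
    cells = defaultdict(list)
    for x in X:
        cells[feature(x)].append(x)
    return sorted(cells.values())

def coarser_eq(P, Q):
    """True iff P is coarser than (or equal to) Q:
    every cell of Q lies inside some cell of P."""
    return all(any(set(c) >= set(d) for c in P) for d in Q)

X = list(range(12))
p_mod2 = partition_by_feature(X, lambda x: x % 2)   # parity concept
p_mod4 = partition_by_feature(X, lambda x: x % 4)   # refines parity
print(p_mod2)   # [[0, 2, 4, 6, 8, 10], [1, 3, 5, 7, 9, 11]]
assert coarser_eq(p_mod2, p_mod4) and not coarser_eq(p_mod4, p_mod2)
```

Here `mod4` composes with the parity concept exactly as in the seed-feature family of Eq. (2): its partition refines the `mod2` partition, giving one comparable pair in the partition poset.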
},
{
"heading": "3.2 PRACTICAL LATTICE LEARNING: TO LEARN LIKE A CHILD (PHASE II)",
"text": "Learning in an information lattice means solving the optimization Problem (1), i.e., searching for a minimal subset of simple rules from the information lattice of a signal so as to best explain that signal. Let P• be the sublattice (or semilattice or poset, if early-stopped) from the construction phase. Projecting a signal ξ : X → R onto P• yields the information sublattice R• := ↓ξ(P•) ⊆ Rξ. It is worth reiterating that (a) P• is constructed first and is data-independent; (b) ξ (data) comes after P•; (c) (R•, ⇐) is isomorphic to (P•, ⪯): R• retains the partial order (po matrix and hasse diagram) and interpretability from P•. As such, R• is what is given at the beginning of the learning phase.\nThe main problem (relaxed). For practicality, we relax Problem (1): instead of the full lattice Rξ, we restrict the search space to R•; instead of minimal rule sets, we consider only antichains (whose elements are mutually incomparable), which is necessary for minimality. This yields:\nminimize_{R⊆R•} ∆(↑(R), ξ) subject to R is an antichain; Ent(r) ≤ ε for any r ∈ R. (4)\nTo solve Problem (4), we adopt a (greedy) idea similar to principal component analysis (PCA): we first search for the most essential rule—the rule that decreases ∆ most—in explaining the signal, then the second most essential rule in explaining the rest of the signal, and so on. Specifically, we start with an empty rule set R(0) := ∅ and add rules iteratively. Let R(k) be the rule set formed by iteration k, let R(k)⇐ := {r ∈ R• : r ⇐ r′ for some r′ ∈ R(k)}, and let R≤ε := {r ∈ R• : Ent(r) ≤ ε}. Then,\n(in iteration k + 1) minimize ∆(↑(R(k) ∪ {r}), ξ) subject to r ∈ R(k)feasible := R≤ε − R(k)⇐. (5)\nWe precompute R≤ε (instead of the whole R•) before the iterations, which can be done by a breadth-first search (BFS) on P•’s hasse diagram from the bottom (the coarsest) up. By the monotonicity of Ent w.r.t. the partial order (cf. the grouping axiom of entropy (Cover & Thomas, 2012)), any BFS branch ends once the entropy exceeds ε. 
(For later use, we save the set R>ε of ending rules in BFS, i.e., the lower frontier of R>ε.) In contrast, R(k)⇐ is computed per iteration (by querying P•’s po matrix).\nUnder review as a conference paper at ICLR 2021\nNested vs. alternating optimization. Computing ↑(R(k) ∪ {r}) requires solving a minimization, so Problem (5) is a nested optimization: argmin over r ∈ R(k)feasible of ∆(argmin over η ∈ ⇑(R(k) ∪ {r}) of ‖η‖2, ξ). One may de-nest the two: instead of comparing rules by lifting them up to the signal domain, we compare them “downstairs” on their own rule domains. So, instead of minimizing (5)’s objective, we\nmaximize over r ∈ R≤ε − R(k)⇐: ∆(↓↑(R(k))(domain(r)), ↓ξ(domain(r))) = ∆(↓↑(R(k))(domain(r)), r). (6)\nThe idea is to find the rule domain on which the recovered ↑(R(k)) and the target signal ξ exhibit the largest gap. Adding this rule to the rule set maximally closes the gap in (6) and tends to minimize the original objective in (5). Nicely, in (6) the lifting does not involve r, so (5) is de-nested, which further iterates into an alternating min-max (or lift-project) optimization. Let r(k)⋆ be the solution and ∆(k)⋆ be the optimal value in Iter k. We update R(k+1) := R(k) ∪ {r(k+1)⋆} − {r(k+1)⋆’s descendants} (so it is always an antichain), and proceed to the next iteration. Iterations end whenever the feasible set is empty, or may end early if the rules become less essential, measured by |∆(k+1)⋆ − ∆(k)⋆| ≤ γ in the nested setting and ∆(k)⋆ ≤ γ in the alternating setting (for some γ). The full learning path & complexity. We denote a solve process for Problem (6) by SOLVE(ε, γ), or SOLVE(ε) if γ is fixed ahead. To avoid tuning ε manually, we solve an ε-path. For ε1 < ε2 < · · · , assume SOLVE(εi) takes Ki iterations; we run the following to solve the main relaxed Problem (4):\n∅ = R(0) → SOLVE(ε1) → R(K1) → SOLVE(ε2) → R(K1+K2) → · · · (7)\nSo, lattice learning boils down to solving a sequence of combinatorial optimizations on the Hasse diagram of a lattice. 
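One iteration of the alternating setting (6) can be sketched in a few lines of Python. Here ∆ is taken as an L1 gap and the lifting is the uniform single-rule one; both are simplifying assumptions (the paper leaves ∆ abstract and uses the minimum-norm lifting for multi-rule sets), and the signal and candidate partitions are hypothetical toys:

```python
def project(signal, partition):
    return [sum(signal[x] for x in cell) for cell in partition]

def lift(rule, partition):
    # Uniform single-rule lifting: spread each cell's value evenly over the cell.
    return {x: v / len(cell) for v, cell in zip(rule, partition) for x in cell}

def gap(a, b):
    # Signal gap Delta, taken here as an L1 distance (an assumption).
    return sum(abs(u - v) for u, v in zip(a, b))

X = [0, 1, 2, 3]
signal = {0: 4.0, 1: 0.0, 2: 3.0, 3: 1.0}
candidates = [[[0, 1], [2, 3]], [[0, 2], [1, 3]], [[0, 3], [1, 2]]]  # toy feasible set

# Start from the trivial recovery (lift of the coarsest rule).
recovered = {x: sum(signal.values()) / len(X) for x in X}

# Alternating step: pick the rule domain where the projections of the recovered
# and target signals disagree most (the "project" half), then re-lift with the
# chosen rule (the "lift" half).
p_star = max(candidates, key=lambda p: gap(project(recovered, p), project(signal, p)))
recovered = lift(project(signal, p_star), p_star)
```

With these toy values the largest gap is on the partition {{0,2},{1,3}}, so that rule is extracted first; iterating the two steps gives the lift-project loop described above.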
We walk through the full process (7) via a toy example, starting with a signal ξ : {0, . . . , 27}2 → [0, 1] denoting an image of “7” and a toy-sized information lattice of the signal (Figure 3A). The sequence of optimizations (7) proceeds at two paces concurrently: the slower pace is indexed by i; the faster pace is indexed by the iteration number k. As mentioned earlier, the sets R≤εi are precomputed at the slower pace, with the (i + 1)-th BFS initialized from R>εi (the ending rules in the i-th BFS). The monotonicity of Ent w.r.t. the partial order assures that these BFSs add up to a single (global) BFS on the entire Hasse diagram, climbing up the lattice from the bottom. This is shown in Figure 3B as the monotonic expansion of the blue region (R≤ε) explored by BFS. Locally at each iteration along the slower pace, solving Problem (6) is quadratic in the worst case when the feasible set is an antichain (i.e., no order), and linear in the best case when the feasible set is a chain (i.e., totally ordered). Since local BFSs add up to a single BFS with a standard linear complexity, the entire learning phase has a total complexity between linear and quadratic in the number of vertices and edges in the whole Hasse diagram. In general, the denser the diagram is, the lower the complexity is. This is because R(k)⇐ tends to be large in this case, with more descendants activated (i.e., red in Figure 3B), which in turn effectively shrinks the feasible set (i.e., the blue region minus red). For example, unlike the first three iterations in Figure 3B, the 4th iteration (ε = 3) activates more than one rule, including the one being extracted as well as all its unexplored descendants. Further, the upper bound is rarely reached. Unlike in this toy example, BFS in practice is often early stopped when ε becomes large, i.e., when later rules become more random. Hence, targeting only the more deterministic and disentangled rules, BFS does not traverse all vertices and edges. 
At the end of the learning process, for explanatory purposes, we store the entire ε-path and the (R(k))k≥0 sequence instead of just the very last one. This yields a rule trace as the standard ILL output, which we present below.\nHow to read ILL output. ILL outputs a rule trace comprising an evolving sequence of rules, rule sets, and recovered signals (Figure 3C). The three sequences are indexed by iteration and by the ε-path, so the rule set at the last iteration under any ε (starred) is the returned solution to the main Problem (4). We depict a rule by its lifting, since it sketches both the partition and the rule values. Figure 3C gives a full presentation of a rule trace. We also introduce a two-line shorthand (Figure 3D), keeping only the sequence of the recovered signals and that of the rules. A rule trace answers what makes ξ an ξ, or what are the best simple rules explaining ξ. ILL rules are more interpretable than just eyeballing patterns. (a) The interpretability of the trace is manifest in its controllability via ε, γ: smaller ε for simpler rules and larger γ for more essential rules. (b) The interpretability of each rule is gained from its partition tag—the criteria by which the abstraction is made. A tag may contain several generating sources as different interpretations of the same rule abstraction. Like different proofs of a theorem, a partition tag with multiple sources reveals equivalent characterizations of a structure and thus more insights into the signal. So, tags are not only computationally beneficial in constructing lattices, but also key to interpretation. We present in-depth analyses of tags in the applications below."
},
{
"heading": "4 ILL EXAMPLES",
"text": "We show typical ILL examples on knowledge discovery in art and science: learning music theory from scores and chemical laws from compounds (while relegating more analyses on handwritten digits to Appendix F). For both, we fix the same priors—F, S in (2)(3)—thus the same lattice. We fix the same parameters: path is 0.2 < 3.2 < 6.2 < · · · (tip: a small offset at the beginning, e.g., 0.2, is used to get nearlydeterministic rules) and γ is 20% of the initial signal gap. This fixed setting is used to show generality and for comparison. Yet, the parameters can be fine tuned in practice.\nMusic illustration. Signals are probability distributions of chords encoded as vectors of MIDI keys. Figure 4a) shows such a signal—the frequency distribution of twonote chords extracted from the soprano and bass parts of Bach’s Cscore chorales (Illiac Software, Inc., 2020)—with the learned rule trace listed below. The first rule is tagged by argsort ◦w[1,2] and has probability all concentrated in one cell whose elements have a larger ycoordinate (the black region above the diagonal). So, this is a deterministic rule, echoing the law of “no voice crossing (N.V.C.)”, i.e., soprano higher than bass. Checking later rule tags finds laws of voice range (V.R.), diatonic scale (D.S.), and consonant interval (C.I.)—almost all of the main static rules on twovoice counterpoint. Notably, the third rule is tagged by both mod12 ◦ w[1] and vertical translation invariance. From both feature and symmetry views, this tag identifies the concept of all Cs, all Ds, etc., which is the music concept of pitch class. The feature view explicitly reveals a period of 12 in pitches—the notion of an octave (in defining pitch class); the symmetry view reveals the topology—the manifold where the concepts lie—in this case a 2D torus.\nChemistry illustration. Signals are booleanvalued functions indicating the presence of compound formulae encoded as vectors of atomic numbers in a molecule database. 
Figure 4b) shows a signal attained by collecting two-element compounds from the Materials Project database (Jain et al., 2013) of common compounds. The first rule, tagged by div18 ◦ w[2], is deterministic: Element 2 can never be Ar, K, or Ca. It nicely captures the visual pattern in Figure 4b) (the last three vacant columns) and hints at some chemistry rules. The second rule, tagged by mod8 ◦ w[2], has peaks at cells tagged by feature values 1, 7, 0, 6. These cells, for Element 2, are halogens (+H), pnictogens, chalcogens, and crystallogens. The third rule shows that alkali metals, alkaline earth metals, crystallogens, and icosagens are the cells common for Element 1. The next rule shows the common combinations, e.g., alkali metals and halogens. Note that the 2nd, 3rd, and 4th rules for chemistry and the 5th, 3rd, and 4th rules for music share the same tags, except that mod12 becomes mod8—the period changes from 12 (a music octave) to 8 (the number of main groups). So, when two chemical elements form a compound, they are like two music notes forming a chord! The music concepts of pitch classes and intervals parallel the chemical concepts of groups and their distances. Although the abstractions are shared, the rules differ. Instead of a diatonic scale in Bach’s chorales, chemistry uses a “cation scale” and an “anion scale”. It is interesting that our intention to show ILL’s generality (same lattice and parameters for different subjects) also suggests links between art and science by interpreting phenomena (signals) in one subject from the perspective of the other (Bodurow, 2018). Applications that extend the experiment here beyond a clustering model to restore the periodic table (Zhou et al., 2018) and render complex molecules in high dimensions are ongoing, aiming to discover new laws, new interpretations of existing laws, and new materials.\nReal-world deployment & evaluation. We generalized the music illustration to a real app of an automatic music theorist (Yu et al., 2016; Yu & Varshney, 2017). 
It specifically implements the alternating min-max setting as a “student-teacher” model: the student is a (music) generator and the teacher is a discriminator. The two form a loop where the teacher guides the student towards a target style through iterative feedback (extracting rules) and exercise (applying rules). This app extends the above music illustration considerably. It considers more music voices, so now signals are in higher dimensions and rules are on more complex chord structures. It considers temporal structure, so now signals include many (un)conditional chord distributions (multi-n-grams), yielding both context-free and context-dependent rules, but new challenges too, namely rare contexts and contradictory rules. ILL’s core idea of abstraction makes “small data large” and thus rare contexts common (Yu & Varshney, 2017), and a redesigned lifting operator resolves contradictions (Yu et al., 2017). Further, parameters like ε, γ are made into self-explanatory knobs for users to personalize their learning pace.\nWe conducted two studies to assess rule-learning capability and interpretability. We present the main results here and detail the procedures in Appendix G. In the first study, we compared ILL-discovered rules with human-codified domain knowledge to see how much known can be reproduced and how much new can be discovered. Trained on just 370 Bach’s chorales, our model reproduced in explicit\n[Figure 5A (how much known?): covered 66%, hinted 26%, missed 7%]\nthe histogram—a symbolic and pictorial encoding. Students were explicitly instructed that writing out a description that was basically a literal repetition of the histogram (e.g., taking a modulo 12 of a chord results in a 91.2% chance of being 0, 0, 4, 7) is not acceptable: they must reveal the music behind the math. In fact, we made it clear to the students that we only wanted qualitative descriptions. 
Students were specifically told (in the instructions) to pay attention only to the relative values of the probabilities, whose exact numbers are unimportant (e.g., what are most likely, what are more likely, what are almost impossible). This homework was due in two weeks. During the two-week period, we asked the students to complete it independently, with no group work or office hours.\nAssess Human Interpretations. The homework was designed in a way such that every rule histogram encoded at least one music concept/rule consistent with standard music theory. In addition, every histogram contained either one additional known music rule or something strange that either conflicted with a known rule or represented something new. We assigned two points per rule. Further, we made an initial rubric containing the (authoritative) music keywords used to describe every rule histogram. Because students’ answers arrived in the form of qualitative text, to ensure the credibility and fairness of the initial rubric, we held a discussion session at a regular lecture time (80 minutes) with all students as well as the teaching staff. During the discussion session, we went over all 25 rules one by one. For each, we first announced the keywords in the initial rubric and explained to the students that these keywords would later be used to grade their homework. However, in the discussion session, every student was encouraged to object to any of our announced keywords and/or to propose new keywords accompanied by a convincing explanation. New/modified keywords that were commonly agreed upon were added/updated to the initial rubric. By the end of the discussion session, we compiled a more inclusive rubric containing broadly accepted keywords. This rubric-generating process was transparent to all the students. In the final step, we manually graded every student’s answer sheet against keywords in the rubric and computed their scores. A summary of the students’ performances is presented in Table 5. 
Except for cases where the student did not do the homework, a major source of score deduction was misunderstanding the n-gram (e.g., the probability of the current chord conditioned on the previous chord was mistakenly interpreted as the probability of the previous chord conditioned on the current one). This may be largely due to unfamiliarity with n-gram models for new CS+Music students. Nevertheless, the majority of the students who did the homework (2/3) succeeded (with respect to the 30/50 passing grade) in interpreting the rules generated by ILL, which in turn provides evidence on the interpretability of the AI-produced knowledge itself.\nTable 5: Students’ final scores.\nScore Range | # of Students\n50 | 3\n[40,50) | 7\n[30,40) | 2\n[20,30) | 4\n[10,20) | 1\n[0,10) | 1\n0 | 5\nH CONCLUSION AND BROADER IMPACTS\nModel transparency and interpretability are important for trustworthy AI, especially when interacting directly with people such as scientists, artists, and even multidisciplinary researchers bridging the Two Cultures (Snow, 1959) (e.g., music and chemistry). The core philosophy underlying ILL arises from a human-centered standpoint and our long-term pursuit of “getting humanity back into artificial intelligence”. We strive to develop human-like artificial intelligence, which in turn may help advance human intelligence—a goal at the intersection of AGI (artificial general intelligence (Goertzel & Pennachin, 2007)), XAI (explainable artificial intelligence (Adadi & Berrada, 2018)), and “AI as augmented intelligence” (Jordan, 2019).\nAs such, the focus of interpretability in this line of research is not just the end result of the model, but the entire learning process. This emphasis on process is not only manifest in this paper (e.g.,\n[Figure 5B (how interpretable?)] 
[Figure 5C (how much new?): figured soprano, entropy = 4.76; figured alto, entropy = 4.78; figured tenor, entropy = 4.80; figured bass, entropy = 4.34]\nFigure 5: ILL assessments on knowledge discovery tasks.\nforms 66% of a standard music theory curriculum (Figure 5A). In the rest, about 26% (e.g., harmonic functions and music forms) was implicitly hinted at by the current n-gram-based model, modeling only transitions of abstractions but not explicitly abstractions of transitions—a future direction. In the second study, we ran a human-subject experiment in the form of homework for a music class. The homework asked 23 students to write verbal interpretations of ILL-generated rules rendered as histograms over tagged partitions. Grading was based on a rubric of keywords generated via majority vote in a later discussion among students and teachers. Figure 5B shows that the majority (2/3) of the students who did the homework succeeded (w.r.t. the 30/50 passing grade) in the interpretation task, which in turn shows the interpretability of the AI-produced knowledge itself.\nIn the first study, our model also discovered new rules that interested our colleagues in the music school. (a) Tritone resolution is crucial in tonal music, yet in Bach’s chorales, tritones sometimes do not resolve in typical ways, but consistently transition to other dissonances like a minor seventh. (b) A new notion of “the interval of intervals” was consistently extracted in several rule traces. This “second derivative”, like acceleration in mechanics, might suggest a new microscopic chord structure heretofore unconsidered. (c) New symmetry patterns reveal new harmonic foundations. As a parallel to the concept of harmony traditionally built on figured bass (dominant in Bach’s chorales, as confirmed by ILL), ILL reveals “figured soprano” as the next alternative in explaining Bach’s music (Figure 5C). 
Although it is not the best view for explaining Bach according to ILL and is not included in any standard music theory class, it may be a valuable perspective for music that starts deviating from the classical style. This was confirmed by domain experts (Sokol, 2016), with more details at the end of Appendix G.1."
},
{
"heading": "5 DISCUSSION: LIMITATIONS AND CHALLENGES",
"text": "As a first step, we devise a new representationlearning model intended to be both theoretically sound and intrinsically interpretable. This paper shows typical setups and applications, but ILL is a general framework that admits new designs of its components, e.g., projectionandlifting or priors. Notably, designing a lattice not only sets the rulelearning capacity but also the “vocabulary” for interpretation which, like the SapirWhorf hypothesis for human language, limits how a lattice explains signals. Likewise, priors have pros and cons based on what we seek to explain and to whom (e.g., not all signals are best explained by symmetry, nor can everyone reads symmetry equally well). One solution is to explore multiple lattices while balancing expressiveness and computation—a common practice in picking ML models too. Further, whether a signal is indeed governed by simple rules requires rethinking. Sometimes, no rules exist, then ILL will indicate this and a casebycase study will be needed. Sometimes, rules are insufficient: is music in fact governed by music theory? Theory is better viewed as necessary but not sufficient for good music: great composers need not be great theorists.\nFollowing studies comparing humancodified knowledge and using humansubject experiments for interpretability, more systematic ILL benchmarking and assessment remain challenging and need longterm efforts. Benchmarking is not as easy as for taskspecific settings (Chollet, 2019), requiring better comparison schemes or a downstream task. Effective ILL assessments must focus on new discoveries and the ability to assist people. Instead of a Turing test for machinegenerated music, one may (at a metalevel) consider tests between independent and machineaided compositions, but both are done by humans. Further, ILL may be incorporated with other models, having an ILL version of deep learning or vice versa. 
For example, one may use ILL as a preprocessing or post-interpretation module in other models to achieve superior task performance as well as controllability and interpretability. Another possibility is to use ILL to analyze attention matrices (as signals) learned from BERT or GPT (Rogers et al., 2020). More future visions are in Appendix H."
},
{
"heading": "A CONNECTION TO CONCEPT LATTICE",
"text": "Per our definition, a concept refers to a component of an abstraction, or more precisely, is a cell in a partition or an equivalence class under an equivalence relation. This definition is consistent with a formal concept defined in formal concept analysis (FCA) (Ganter & Wille, 2012; Ganter et al., 2016; Priss, 2006) as a set of objects (extent) sharing a set of attributes (intent), which can be also treated as objects that are equivalent under the attributes. However, our definition of a concept generalizes that of a formal concept in two ways. First, in our case, a partition or an equivalence relation is not induced from domainspecific attributes through formal logic and formal ontology, but from universal priors drawn from the Core Knowledge (detailed in Section 3.1 in the main paper). Second, specifying a partition considers all of its concepts, whereas specifying a set of formal concepts only considers those with respect to a given formal context. As a result, partition lattices in our case generalize concept lattices in FCA, and are not generated, hence not constrained, by domain knowledge such as those encoded in formal ontologies.\nMathematically, let (PX , ) be the partition lattice comprising all partitions of X and (2X ,⊆) be the subset lattice comprising all subsets of X . Clearly, the power set 2X is the same as {C ∈ P  P ∈ PX}. That is, the subset lattice is also the lattice comprising all concepts from all partitions of X , which can be then called the full concept lattice. So, one can define any concept lattice in FCA as a sublattice of the full concept lattice (cf. Definition 3 in (Ganter et al., 2016)). Yet, such a concept sublattice does not have to include all concepts from a partition, and in many cases, it tends to miss many concepts if they are not known in the existing ontology. We give two examples below to further illustrate the connection between a partition lattice and a concept lattice.\nFirst, consider biological taxonomy. 
Dogs and cats are two concepts in species, which is an abstraction containing other concepts such as eagles. Likewise, mammals and birds are two concepts in class, which is an abstraction containing other concepts such as reptiles and insects; further, animals and plants are two concepts in kingdom. In light of hierarchy, as abstractions, species ⪰ class ⪰ kingdom (in a partition lattice); as concepts, dogs ⊆ mammals ⊆ animals (in a concept lattice). Note that when forming a concept lattice, one need not include, say, all species. Yet when having species as an abstraction in a partition lattice, this abstraction must contain all species, both known and unknown, where the latter are usually of more interest for knowledge discovery.\nSecond, consider music theory. C major triads, C minor triads, and B diminished triads are concepts in an abstraction induced by music octave-shift and permutation invariance. Further, major triads, minor triads, and diminished triads are concepts in another abstraction induced by music octave-shift, permutation, and further transposition invariance. Clearly, for abstractions, the former abstraction is finer than the latter; for concepts, the set of C major triads is a subset (or a special case) of the set of major triads. However, chords that are not defined in traditional music theory but appear as new concepts in a known abstraction (e.g., the two above) may be more interesting, since they may suggest new composition possibilities while still obeying the same music abstraction, in this case the same music symmetry. New concepts from new abstractions may push the composition boundary even further, suggesting new types of chords discovered from, e.g., new symmetry (but possibly within a known symmetry family). See the end of Appendix G.1 for more examples from new discoveries."
},
{
"heading": "B MORE GENERALIZED FORMALISM FOR INFORMATION LATTICE",
"text": "The mathematical setting in the main paper is for a nonnegative signal on a finite domain. However, this is not a limitation, but purely for notational brevity and computational reasons. First, regarding nonnegativity, in many real scenarios, the signal is bounded and its value is only relative. In these cases, one can simply add an offset to the signal to make it nonnegative. More generally, we can\nconsider a signal to be any measurable function ξ : X → Rn. Then the notions of an abstraction, a concept, a rule, as well as the partial order can be generalized as in Table 1. Hence, the notion of an information lattice is still welldefined in the generalized setting. The essence of the two settings lies in how we formalize an abstraction, whether using a partition or a σalgebra. However, the two are not very different from each other: any partition of X generates a σalgebra on X , and any σalgebra on a countable X is uniquely generated by a partition of X (Çınlar, 2011).\nFurther, the main paper uses the summation functional in defining a rule of a signal, or the projection operator. However, other options are possible, e.g., mean, max, min, or a specially designed functional. The lifting operator can then be redesigned accordingly. In particular, besides always favoring the most uniform signal, the design of the special lifting can have extra freedom in considering other criteria for picking a signal from the general lifting."
},
{
"heading": "C MORE INSIGHTS ON THE SPECIAL LIFTING",
"text": "Consider the special lifting ↑(R) for any rule setR = ↓ξ (P) of a given signal ξ. Computing ↑(R) is simple ifR = {r} contains only a single rule. In this case, ↑(R)(x) = ↑(r)(x) := r(C)/C for any x ∈ C ∈ domain(r), which requires simply averaging within each cell. However, computing ↑ (R) becomes much less trivial when R > 1. By definition, we need to solve the minimization problem:\n↑(R) := argminr∈⇑(R)‖r‖2. (8)\nInstead of directly throwing the above problem (8) into a generic optimization solver, there is a more efficient approach which also reveals more insights on the special lifting. More specifically, one can check that any multirule lifting ↑(R) can be computed as a singlerule lifting ↑(r?) where the single rule r? is defined on the join ∨P and is computed as follows:\nr? := argminr∈⇑(∨P)(R)‖r̃‖2, with the weighted norm ‖r̃‖2 := √∑\nC\nr(C)2\nC . (9)\nSo, instead of liftingR directly to the signal domain X , we liftR to the join ∨P first and then to X . Since  ∨P ≤ X, the minimization problem (9) is in a smaller dimension compared to the original problem (8), and thus, can be solved more efficiently. In the minimization problem (9), by definition, ⇑(∨P) (R) := {r : ∨P → R  ↓r (P) = R}. Hence, every rule r ∈ ⇑(∨P) (R) can be treated as a singlerule summary of the rule setR, and r? is one of them—the one that yields the most uniform signal. Realizing the special lifting R → ↑ (R) as the twostep lifting R → r? → ↑ (r?) = ↑ (R) reveals the following insight: given rules abstracting ξ at different levels (coarser or finer), the best one can hope to faithfully explain ξ is at the level of the join. Determining ξ at any level finer than the join would then require additional assumptions other than the rule set itself, such as the preference of uniformity used here. This further explains the two sources of information loss (join and uniformity) discussed in the recovery process of a signal (cf. Section 3 in the main paper). 
Notably, determining a signal even at the level of the join may be ambiguous, since the general lifting ⇑(∨P)(R) to the join is not necessarily a singleton. This particularly implies that r⋆, as one of the single-rule summaries of R of ξ, is not necessarily a rule of ξ, i.e., there is no guarantee that r⋆ = ↓ξ(∨P). To make it so, we need more rules."
},
{
"heading": "D EXISTING WORK ON SUBLATTICE GENERATION",
"text": "General methods for computing the sublattice LB of a full lattice L generated by a subset B ⊆ L fall into two basic families, depending on whether the full lattice needs to be computed. The first uses alternating join and meetcompletions, with worsecase complexityO(2B); the second characterizes the elements of L that belong to the sublattice, with complexity O(min(J(L), M(L))2L) where J(L) and M(L) denote the number of joinirreducibles and meetirreducibles, respectively (Bertet & Morvan, 1999). The latter requires computing the full lattice, which is intractable in our case of partition lattices, as L = PX  grows faster than exponentially in X whereas P〈F,S〉 is usually smaller than X. So, we use the first approach and compute alternating join and meetcompletions. The same principle of avoiding computing the full lattice has been applied to the special context of concept lattices (Kauer & Krupka, 2015), yet the technique there still requires the full formal context corresponding to the full concept lattice. Note that sublattice completion is, by definition, computing the smallest sublattice LB (in a full lattice L) containing the input subset B ⊆ L, where LB must inherit the meet and join operations from L. It generalizes but is not the same as DedekindMacNeille completion (Bertet & Morvan, 1999; MacNeille, 1937; Bertet et al., 1997)."
},
{
"heading": "E MORE DETAILS ON THE CONSTRUCTION PHASE",
"text": "This section elaborates on the second half of Section 3.1 in the main paper, presenting more algorithmic details on poset construction and sublattice completion. The core data structures for posets are the socalled adjacency matrix and Hasse diagram, encoding the partial order ≺ and the cover relation ≺c, respectively (Garg, 2015). The former is best for querying ancestors and descendants of a partition within the lattice; the latter is best for querying parents and children of a partition. (A more advanced technique includes chaindecomposition, but the two here are sufficient for this paper.) More specifically,\nP ′ is an ancestor of P ⇐⇒ P ≺ P ′\nP ′ is a parent of P ⇐⇒ P ≺c P ′ (i.e., P ≺ P ′ but no P ′′ satisfies P ≺ P ′′ ≺ P ′). We introduce a few algorithmic notations. Given a partition poset (P, ), we use P.po matrix and P.hasse diagram to denote the adjacency matrix and Hasse diagram of P, respectively. For any partition P ∈ P, we use P.ancestors, P.descendants, P.parents, and P.children to denote the sets of ancestors, descendants, parents, and children of P , respectively. Notably, the two data structures are not only important for the construction phase but for the subsequent learning phase as well. The core subroutine in the construction phase is ADD PARTITION sketched as Algorithm 1. It is the key unit step in both poset construction and (join)semilattice completion.\nPoset construction. This corresponds to Step 3 in the flowchart in Section 3.1 of the main paper. Recall that poset construction refers to the process of sorting a multiset P〈F,S〉 of tagged partitions into a poset (P〈F,S〉, ), where the partition tags are features and symmetries. Naively, if we write an inner subroutine COMPARE(P,P ′)—called an oracle in the related literature—to compare two partitions, sorting a multiset into a poset amounts to ( N 2 ) calls of this pairwise comparison where N is the size of the input multiset. 
So, the common idea shared by almost all poset sorting algorithms is to reduce the number of oracle calls as much as possible. As mentioned in the main paper, considering the additional properties in our case, we leverage (a) transitivity (valid for all posets), (b) partition size (valid for partitions), and (c) partition tags (valid for tagged partitions) to predetermine or prefilter relations. In other words, we want to infer from the context as many pairwise relations as possible, so that the number of actual pairwise comparisons can be minimized.\nMore specifically, we start from an empty poset and call ADD PARTITION to incrementally add partitions from the input multiset to the poset. As the outer subroutine, ADD PARTITION leverages transitivity and partition size by maintaining three live data structures, namely size2partns, po matrix, and hasse diagram, so as to avoid calling COMPARE whenever possible. Consequently, COMPARE is called in only two places (underlined in Algorithm 1): one for = and one for ≺. When called as the inner subroutine, COMPARE(P, P′) does not always perform an actual computation for the pairwise comparison. Instead, it first checks whether the tags are informative (e.g., compositions/supergroups imply coarser partitions) and only if not, makes an actual comparison. 
With the additional information from partition size, an actual comparison can be done in O(|X|) time\nAlgorithm 1: ADD_PARTITION(Pτ, P): adds a tagged partition Pτ to a partition poset (P, ⪯)\nInput: a tagged partition Pτ, where the tag τ can be a feature/symmetry or a join/meet formula; a partition poset (P, ⪯), with the following members and hash tables:\n· every P ∈ P is a unique partition (indexed by a unique identifier)\n· P.partn2tags[P] := {τ | Pτ = P} denotes the set of all tags inducing P\n· P.size2partns[k] := {P | |P| = k} denotes the set of all P ∈ P with size k\n· P.po_matrix encodes the partial order ≺, best for getting P.ancestors/descendants\n· P.hasse_diagram encodes the cover relation ≺c, best for getting P.parents/children\nStep 1: determine if Pτ is new by COMPARE(P, Pτ) (for =) for every P ∈ P.size2partns[|Pτ|]\nif Pτ ∈ P.size2partns[|Pτ|]: update P.partn2tags[Pτ] by adding τ; return\nelse: create a new hash entry P.partn2tags[Pτ] = {τ}; proceed to Step 2\nStep 2: add the new partition Pτ to P\n(2a) update P.size2partns[|Pτ|] by adding Pτ\n(2b) update P.po_matrix and P.hasse_diagram\n– for every existing size k < |Pτ| sorted in descending order: for every P ∈ P.size2partns[k]:\nif P.parents ∩ Pτ.descendants ≠ ∅: update P.po_matrix by adding P ≺ Pτ\nelse: COMPARE(P, Pτ); update P.po_matrix and P.hasse_diagram if P ≺ Pτ\n(here one can check: it is necessarily the case that P ≺c Pτ)\n– do the above symmetrically for every existing size k > |Pτ| sorted in ascending order\n– (note: every P ∈ P.size2partns[k] for k = |Pτ| is incomparable with Pτ)\n– clean cover relation: remove any P∗ ≺c P∗∗ from P.hasse_diagram if P∗ ≺c Pτ ≺c P∗∗\nvia a mapping process. More specifically, given two partitions P, P′, without loss of generality, we assume |P| ≤ |P′|. An actual comparison is made by tentatively creating a mapping ν : P′ → P. One can check that such a ν exists if and only if P ⪯ P′. Hence, if |P| = |P′| (resp. 
|P| < |P′|), one can determine = (resp. ≺) if ν is created successfully, and incomparability otherwise. The mapping complexity is linear in |X|, with linear coefficient 1 if the mapping succeeds and linear coefficient < 1 if it fails. In the worst case (e.g., if all partitions are incomparable), all (N choose 2) pairwise comparisons are required. Our algorithm works best when partitions are richly related (i.e., the Hasse diagram is dense), which is indeed the case for our tagged partitions induced from systematically formed features and symmetries.\nSemilattice completion. This corresponds to Step 4 in the flowchart in Section 3.1 of the main paper. Recall that join-semilattice completion refers to the process of completing a partition poset into a semilattice. We only detail join-semilattice completion, since meet-semilattice completion can be done symmetrically. Formally, we want to compute the join-semilattice of PX generated by the input poset (P〈F,S〉, ⪯). We denote the resulting join-semilattice by 〈P〈F,S〉〉∨. By definition,\n〈P〈F,S〉〉∨ := {∨P | P ⊆ P〈F,S〉}.\nNaively, if computing 〈P〈F,S〉〉∨ literally from the above definition, one has to iterate over all subsets of P〈F,S〉 and compute their joins. This amounts to 2^N join computations, where N = |P〈F,S〉| is the size of the input poset, and moreover, many of the joins are not pairwise. Yet, as in our earlier poset construction, we may reduce the computation of joins by an incremental method, which also embeds ADD_PARTITION as a subroutine and utilizes partition sizes and tags, but now the tags are join formulae instead of features or symmetries.\nMore specifically, we start with an empty semilattice P, and add partitions in P〈F,S〉 to P one by one from smaller-sized to larger-sized (note: the size information is maintained in P〈F,S〉.size2partns). When a partition P ∈ P〈F,S〉 is to be added, we make a tag named by itself, i.e., let Pτ := P with τ := {P}, and then call ADD_PARTITION(Pτ, P). 
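An actual pairwise join can be sketched in Python as follows. This is an illustrative sketch only (assuming, as before, that a partition is stored as a dict mapping each x ∈ X to its cell ID): the join cell of x is the pair of its cell IDs in the two operands, renumbered to fresh integer IDs, giving an O(|X|) computation.

```python
# Hypothetical sketch of the pairwise join v(P, Q): each element's join
# cell is the unique pair (C(x), C'(x)) of its cell IDs in P and Q.

def join(P, Q):
    ids = {}   # (P-cell, Q-cell) pair -> fresh integer cell ID
    out = {}   # resulting partition: x -> cell ID
    for x in P:
        pair = (P[x], Q[x])
        out[x] = ids.setdefault(pair, len(ids))
    return out
```

Joining a partition with itself returns (a renumbering of) the same partition, as expected of an idempotent lattice operation.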
There are two possibilities here: Pτ already exists in P (the call ends in Step 1) or Pτ is new (the call ends in Step 2). In the former case, we are done with Pτ.\nIn the latter, for every P′ ∈ P\\{Pτ}, compute the pairwise join J(P′) := ∨{Pτ, P′} and its tags T(P′) := {τ ∪ τ′ | τ′ ∈ P.partn2tags[P′]}, and call ADD_PARTITION(J(P′) tagged by T(P′), P). Like COMPARE, computing a join can be optimized by leveraging previously computed tags and the partial order in the input poset P〈F,S〉, so as to avoid an actual join computation whenever possible. When inferring from the context is not possible, one can perform an actual join computation ∨(P, P′) in O(|X|) time. This is done by collecting the unique pairs of cell IDs (C(x), C′(x)) for every x ∈ X, where C(x) and C′(x) denote the cell IDs of x in P and P′, respectively. In the worst case (e.g., if all partitions are incomparable and join-irreducible), the complexity is inevitably O(2^N). However, as in poset construction, our algorithm works best when the partial order structure is rich.\nPractical tips for sublattice completion. This corresponds to Step 5 in the flowchart in Section 3.1 of the main paper. Recall that constructing the sublattice of PX generated by P〈S,F〉 follows the alternating process: L0 := P〈S,F〉, L1 := 〈L0〉∨, L2 := 〈L1〉∧, L3 := 〈L2〉∨, and so forth, which terminates as soon as Lk−1 = Lk. We denote the end result by 〈P〈S,F〉〉∨∧···, which is the desired sublattice. However, we may want to stop early in the completion sequence, due to concerns about computation, interpretability, and expressiveness, as well as their tradeoffs. We suggest a practical tip on deciding where to stop. If the input poset P〈F,S〉 is small, run alternating joins and meets, or even complete it to the sublattice if affordable. If P〈F,S〉 is moderate, complete the joins only (as join is closely related to rule lifting; see Appendix C for more details). If P〈F,S〉 is large, just use it."
},
{
"heading": "F MORE ANALYSES IN THE LEARNING PHASE",
"text": "This section elaborates on the last paragraph of Section 3.2 in the main paper, presenting more analyses and interpretations on the rule traces elicited from the toy handwrittendigit examples. Yet, as mentioned in the main paper, computer vision is currently not among the typical use cases of ILL. Learning rules of handwritten digits may not be of much independent interest unless for calligraphy. So, the analyses and interpretations here are for illustration purposes only. We refer readers to the Broader Impact section in the main paper for possible future directions on how ILL may be used, together with other ML models, to solve computer vision tasks.\nRecall that the main use case of ILL is to explain a signal ξ, answering what makes ξ an ξ. The same toy example illustrating an ILL process is replayed here in Figure 3. The signal ξ : {0, . . . , 27}2 → [0, 1] is a grayscale image of a handwritten “7”. In this case, a rule of ξ, or the projection of ξ to a partition of {0, . . . , 27}2, can be viewed as gathering “ink” within each partition cell. Accordingly, the (special) lifting can be viewed as redistributing the gathered “ink” (evenly) in each cell. Hence, we term this view the ink model. For visual convenience, we depict a rule of a 2D signal by its lifting (i.e., another grayscale image), since with pixels in the same cell colored the same, we can use the lifting to sketch both the partition and the rule values. More precisely, when a lifting represents a rule, it must be viewed in terms of blocks or superpixels; whereas a real lifting (i.e., a signal or a real image) is viewed normally by the regular pixels. To better clarify, all rules in Figure 3 are displayed in red boxes, whereas all liftings are in green ones.\nFor a simple illustration, we draw a small number of features and symmetries to generate a poset (P•) of 21 partitions. The corresponding part of the information lattice (R•) is shown by its Hasse diagram in Figure 3. 
Further, on top of the Hasse diagram, we demarcate the frontiers of the sublevel sets (R≤ε) by six blue dashed curves. Note that in this tiny diagram, we have sketched a full range of sublevel sets; for large diagrams, however, sublevel sets are constructed for small ε values only, in a single-pass BFS. The right part of Figure 3 illustrates a complete ILL process in the alternating setting, with lift and project signified by the green up-arrows and red down-arrows, respectively. During the learning process, ILL tries to minimize the gap in the signal domain (upstairs) through iterative eliminations of the largest gap in the rule domain (downstairs). Eliminating a larger rule gap tends to imply a larger drop in the signal gap, but not necessarily in every iteration, since the special lifting may accidentally recover a better signal if the assumed uniformity is, by chance, present in the signal. The rule set R(k) formed per iteration is presented in the middle of the right part of Figure 3, which jointly shows the complete rule trace continuously progressing along the path.\nThe rule set in the last iteration under any ε (marked by ? in Figure 3) is the returned solution to the main relaxed Problem (4) in the main paper. This rule set is used to answer what makes ξ an ξ. For example, let rj denote the rule with ID j (here a rule ID is the same as the partition ID, the unique identifier used in Algorithm 1 during the construction phase). Then, among all rules whose entropies are no larger than ε = 2, the third rule set in the trace, R(3) = {r9, r1, r18}, best explains what makes ξ an ξ. However, if more complex rules are allowed, say if all rule entropies are now capped by ε = 6, R(7) = {r13, r15, r19} is the best. Recall that we do not just eyeball the rules to get intuitive understandings. Every rule is the projection of the signal to a tagged partition, where the tag, generated in a prior-driven way, explicitly explains the underlying abstraction criteria. 
For example, r19 in Figure 3 comes from a symmetry tag representing a permutation invariance, which visually renders as a reflection invariance. Rules r8 and r9 come from two feature tags, div7 ◦ w[1] and div7 ◦ w[2], respectively. These two feature tags represent continuous and even collapsing in the first and the second coordinate, respectively, which visually renders as horizontal and vertical strips. Both rules are later absorbed into r13, tagged by div7 ◦ w[1,2], since its rule domain is strictly finer. These rules (r8, r9, r13) apparently summarize the horizontal and vertical parts of the handwritten “7”. Further, the vertical part of the “7” is longer and slants more, so we see more vertically-patterned rules in the rule trace (r9, r11, r15). These rules are obtained from finer and finer abstractions along the horizontal direction, so as to capture more details of the vertical part of that “7”, such as its slope. Notably, among these vertically-patterned rules, r11 is induced from the symmetry representing a horizontal translation invariance, but it is quickly absorbed into r15, whose entropy is not much higher. This transient appearance of r11 implies that it plays a less important role in explaining this handwritten “7”. In fact, from more experiments, symmetries in general play a less important role in explaining many “7”s. This is, however, not the case in explaining many “8”s, where symmetries occur much more often. For example, consider a symmetry fused from translation and permutation invariances whose fundamental domain is homeomorphic to a Möbius strip. We hypothesize that this topological property might be related to the twisted nature of an “8”. For a visual comparison, we present the rule traces learned from a “7” and an “8” in Figure 6, as well as the visual similarity between a Möbius strip and an “8”."
},
{
"heading": "G STUDIES ON ILLBASED MUSIC APPLICATION",
"text": "We introduce two tests associated with a realworld application. The first is to assess rulelearning efficacy, where we compare machinediscovered rules to humancodified domain knowledge. The second is to assess humaninterpretability, where we use human subject experiments on interpreting machinegenerated rules.\nThe application here is our first step towards building an automatic music theorist and pedagogue, which is to be deployed as an assistant in music research and education. The two tests are our initial effort towards a systematic benchmarking and assessment platform. In the continuing effort of bridging human and machine intelligence, new standards are to be set and commonly agreed upon, so as to reasonably compare machinecodified discoveries with humancodified knowledge, as well as to use humansubject experiments for assessing interpretability. Fully developing assessment protocols is a challenging, longterm endeavor. Here, we use the two tests as starting points, and present results from each. Respectively, the first experiment tests music rule discovery, a basic requirement to be a theorist; the second tests interpretability, a basic requirement to be a pedagogue.\nTo conduct the two tests, we first build a userfriendly web application, which is used to better see and control the ILL learning process and results. Figure 7 illustrates the web interface. Users learn music rules—each as a histogram over a tagged partition (i.e., machinecodified music concepts)—and control their learning pace via selfexplanatory knobs whose set values are automatically converted to internal parameters (e.g., , γ). One critical musicspecific extension to the vanilla ILL presented in the main paper is adding a temporal component, since music is highly contextual. 
This amounts to considering more than one signal simultaneously, including various (un)conditional chord distributions (multiple n-grams with varying n's and varying conditionals) encoding information on individual chords as well as melodic and harmonic progressions. Accordingly, ILL produces both context-free and context-dependent rules, each of which is indexed by a partition and a conditional under that partition. For example, given the partition that is equivalent to classifying music chords into roman numerals and conditioned on the previous two chords being a I6/4 followed by a V, a rule specifies the probability distribution of the next roman numeral, and in this case reproduces the music rule on the Cadential 6/4. Note that in a context-dependent rule, not only is the query chord abstracted, but so is the conditional. This is in contrast with many classical n-gram models where no abstraction is present, which may therefore suffer from the problem of rare contexts, where a conditional occurs very few or even zero times in the training set. Here, however, the core idea of abstraction makes “small data” large and thus rare contexts common. More examples of context-free and context-dependent rules are illustrated as histograms in Figure 8. These rule histograms are generated by ILL from 370 of Bach's four-part chorales (in the format of digital sheet music), and are used in the two experiments detailed below.\nG.1 COMPARISON TO HUMAN-CODIFIED KNOWLEDGE\nWe compare rules learned by ILL to a standard undergraduate music theory curriculum. We want to use known laws from music theory as a benchmark to see how ILL-generated rules correspond to human-codified music knowledge. In particular, we want to see what is covered, what is new, and what is different. 
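The context-dependent rules described above—conditional distributions over abstracted chords—can be sketched as an n-gram model over a partition's cells. This is an illustrative sketch, not the paper's model: `abstract` is an assumed function mapping a raw chord to its cell (e.g., its roman numeral), and abstracting both the query and the conditional is what makes rare contexts common.

```python
# Hedged sketch of context-dependent rules: P(next abstracted chord |
# previous n-1 abstracted chords), estimated by counting over a corpus
# of chord sequences. Both the conditional and the query are abstracted.

from collections import Counter, defaultdict

def context_rules(corpus, abstract, n=3):
    """corpus: list of chord sequences; returns {context: {symbol: prob}}."""
    counts = defaultdict(Counter)
    for piece in corpus:
        abstracted = [abstract(ch) for ch in piece]
        for i in range(len(abstracted) - n + 1):
            context = tuple(abstracted[i:i + n - 1])
            counts[context][abstracted[i + n - 1]] += 1
    return {ctx: {sym: c / sum(ctr.values()) for sym, c in ctr.items()}
            for ctx, ctr in counts.items()}
```

With a coarser `abstract` (a coarser partition), more raw contexts collapse into the same abstracted context, so each rule is estimated from more data.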
Yet, the ultimate goal is not just to use known music theory as a ground truth for the purpose of driving ILL to fully reconstruct what we know, but eventually to discover new rules, to gain new understandings of existing rules, to suggest new composition possibilities, as well as to teach rules in a personalized way.\nA priori, we are aware of three major differences between human-codified music theory and ILL-generated rules. (a) Regarding music raw representations (input), laws of music theory are derived from all aspects of sheet music, whereas ILL-generated rules are currently derived from only MIDI pitches and their durations. This is because we currently study ILL as a general framework. When a music-specific application is developed later, one can include more music raw representations such as letter pitches, meter, measure, beaming, and articulations. (b) Regarding rule format (output), laws of music theory and ILL-generated rules have two different styles, the former being more descriptive and absolute (hard), the latter more numerical and probabilistic (soft). For instance, a music rule that completely forbids consecutive fifths is reproduced by an ILL-generated rule that assigns a small nonzero probability to the event. Therefore, while it is possible to “translate”, with information loss, a (precise) ILL-generated rule to a (verbal) rule in known theory, it may not make sense to “translate” in the opposite direction. Also, it is not a good idea to hard-code known rules as categorical labels in a supervised setting, since music rules are inherently flexible and hard-coding may lead to a rule-based AI that generates somewhat “mechanical” music such as the Illiac Suite (Hiller & Isaacson, 1957). (c) Regarding purposes, laws of music theory are intended more for general pedagogical purposes than to reflect the style of a particular data set. 
For instance, while consecutive fifths are banned in homework and exams, they may be widely used in many pop songs. Even in our data set of Bach’s chorales (which are supposed to follow the known rules quite well), we see that Bach himself wrote a handful of consecutive perfect intervals. By contrast, ILL-generated rules are specific to the input data set. We may certainly find some data sets that follow the known rules quite well (e.g., Bach’s chorales), but also others that break many known rules and even set their own rules.\nKeeping these three differences in mind, and by further isolating them from the comparison results, we can reveal the remaining differences that are due to the rule-learning process itself. To come up with the benchmark, we compiled a comprehensive syllabus of laws from music theory taught in our music school’s theory review course, which runs through the full series of theory classes at a fast pace. This human-codified music knowledge is organized as a running list of 75 topics and subtopics indexed by lecture number. ILL-generated rules, on the other hand, are indexed by partition (ID) and n-gram (n). The results are summarized below in Table 2, where the colored crosses in the last column indicate topics that are missed by ILL for different reasons.\nAmong the total of 75 topics in Table 2, we first set aside 7 of them (red crosses) which require music raw representations beyond MIDI pitches and durations (e.g., accents and enharmonic respellings of some augmented sixth chords). ILL covered 45 of the remaining 68 topics, yielding a coverage of 66%. Among the 23 missed topics, 18 (blue crosses) are related to deeper-level temporal abstractions such as harmonic functions, key areas, and forms. These temporal abstractions may be better modeled as abstractions of transitions, which are implicitly captured but not explicitly recovered by our current multi-abstraction, multi-n-gram language model, which models only transitions of abstractions. 
The other 5 missed topics (black crosses) are tricky and require ad-hoc encodings, which are not explicitly learnable (but may be implicitly captured to some extent) by our current ILL implementation. Accordingly, the composition of the 30 = 7 + 18 + 5 uncovered topics suggests three future directions for raising the rule-learning capacity of the current implementation: (a) include more music raw representations; (b) model abstractions of transitions; (c) either make music-specific adjustments when developing music apps or figure out a more expressive and more general framework in the long run. However, remember that the goal here is not to reproduce what we know but to augment it. So, we may certainly stop after enabling abstractions of transitions, which in the best case can yield an improved coverage of 84% (i.e., 93% of the topics learnable from MIDI notes alone), which is good enough.
Lecture | Music Theory | Partition IDs | n-gram | Covered
1 | music accents | — | — | ✗
2 | pitch | 1–4 | 1 | ✓
2 | pitch class | 16–19 | 1 | ✓
2 | interval | 31–36 | 1 | ✓
2 | interval class | 97–102 | 1 | ✓
3 | stepwise melodic motion (counterpoint) | 1–4 | 2 | ✓
3 | consonant harmonic intervals (counterpoint) | 97–102 | 1 | ✓
3 | beginning scale degree (counterpoint) | 16–19 | 2 | ✓
3 | ending scale degree (counterpoint) | 16–19 | 2 | ✓
3 | beginning interval class (counterpoint) | 97–102 | 2 | ✓
3 | ending interval class (counterpoint) | 97–102 | 2 | ✓
3 | parallel perfect intervals (counterpoint) | 97–102 | 2 | ✓
3 | directed perfect intervals (counterpoint) | — | — | ✗
3 | law of recovery (counterpoint) | 1–4 | ≥3 | ✓
3 | contrapuntal cadence (counterpoint) | 1–4, 97–102 | 2,3 | ✓
3 | melodic minor ascending line (counterpoint) | — | — | ✗
4 | triads and seventh chords | 26–30 | 1 | ✓
4 | triads and seventh chords: quality | 140–144 | 1 | ✓
4 | triads and seventh chords: inversion | 113–117 | 1 | ✓
5 | figured bass | 113–117 | 1,2 | ✓
5 | roman numerals | 81–85, 129–133 | 1 | ✓
6 | melodic reduction (Schenkerian analysis) | — | — | ✗
7 | passing tone (tones of figuration) | 1–4, 134–144 | 3 | ✓
7 | neighbor tone (tones of figuration) | 1–4, 134–144 | 3 | ✓
7 | changing tone (tones of figuration) | 1–4, 134–144 | 4 | ✓
7 | appoggiatura (tones of figuration) | 1–4, 134–144 | 3 | ✓
7 | escape tone (tones of figuration) | 1–4, 134–144 | 3 | ✓
7 | suspension (tones of figuration) | 1–4, 134–144 | 3 | ✓
7 | anticipation (tones of figuration) | 1–4, 134–144 | 3 | ✓
7 | pedal point (tones of figuration) | 1–4 | ≥3 | ✓
7 | (un)accented (tones of figuration) | — | — | ✗
7 | chromaticism (tones of figuration) | — | — | ✗
8 | tonic (function) | — | — | ✗
8 | dominant (function) | — | — | ✗
8 | authentic cadence | 1, 4, 81–85, 129–133 | 2,3 | ✓
8 | half cadence | 81–85, 129–133 | 2,3 | ✓
9 | voice range (four-part texture) | 1–4 | 1 | ✓
9 | voice spacing (four-part texture) | 31–41 | 1 | ✓
9 | voice exchange (four-part texture) | 20–25 | 2 | ✓
9 | voice crossing (four-part texture) | 53–63 | 1 | ✓
9 | voice overlapping (four-part texture) | — | — | ✗
9 | tendency tone (four-part texture) | 16–19 | 1,2 | ✓
9 | doubling (four-part texture) | 86–91 | 1 | ✓
10 | harmonic reduction (second-level analysis) | — | — | ✗
11 | expansion chord | — | — | ✗
12 | predominant (function) | — | — | ✗
13 | phrase model | — | — | ✗
14 | pedal or neighbor (six-four chord) | 4, 113–117 | 3 | ✓
14 | passing (six-four chord) | 4, 113–117 | 3 | ✓
14 | arpeggiated (six-four chord) | — | — | ✗
14 | cadential (six-four chord) | 85, 113–117, 133 | 3,4 | ✓
15 | embedded phrase model | — | — | ✗
16 | non-dominant seventh chord (function) | — | — | ✗
17 | tonic substitute (submediant chord) | — | — | ✗
17 | deceptive cadence (submediant chord) | 81–85, 129–133 | 2,3 | ✓
18 | functional substitute (mediant chord) | — | — | ✗
19 | back-relating dominant | 81–85, 129–133 | 2,3 | ✓
20 | period (I) | — | — | ✗
21 | period (II) | — | — | ✗
22 | period (III) | — | — | ✗
Table 2 (cont.)
From another source of music theory considering music symmetries (Tymoczko, 2010), we compare ILL-generated rules with a set of commonly used music operations known as the OPTIC operations, namely octave shifts (O), permutations (P), transpositions (T), inversions (I), and cardinality changes (C). The results are summarized in Table 3, which shows that ILL covers the four major types of operations (OPTI). The music C operation is not recovered, since it is not a transformation in the mathematical sense. 
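To illustrate how a symmetry induces a tagged partition, the cells can be computed as the orbits of the group generated by the symmetry's generators. This is a hedged sketch under assumed representations (the domain as a finite list, each generator as a dict x → g(x)), not the paper's implementation; orbits are found with a small union-find.

```python
# Illustrative sketch: a symmetry group acting on a finite domain X induces
# the partition of X into orbits. Each generator g is a dict x -> g(x);
# union-find merges every x with its image g(x).

def orbit_partition(X, generators):
    parent = {x: x for x in X}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for g in generators:
        for x in X:
            rx, rg = find(x), find(g[x])
            if rx != rg:
                parent[rg] = rx
    return {x: find(x) for x in X}  # cell ID = orbit representative
```

For instance, the reflection x ↦ 3 − x on {0, 1, 2, 3} (an improper rotation in the notation below) yields the two-cell partition {{0, 3}, {1, 2}}.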
Notations: t_v denotes a translation by the translation vector v, i.e., t_v(x) := x + v; r_A denotes a rotation (proper or improper) by the rotation matrix A, i.e., r_A(x) := Ax. As a special type of rotation matrix, P^(···) denotes a permutation matrix whose superscript is the cycle notation of a permutation. Note that ILL, as a general framework, considers a much larger universe of generic symmetries (from Core Knowledge) beyond those already considered in music. Therefore, ILL can not only study existing music symmetries, but also suggest new symmetries to be exploited in new music styles, as well as possible music interpretations of symmetries discovered in other fields like chemistry, as described in the main paper.\nLastly, we mention a few new rules discovered by ILL that are interesting to our colleagues in the School of Music. First, tritone resolution plays an important role in tonal music and appears as an epitome in many more general harmonic resolutions. Yet, in Bach’s chorales, tritones are sometimes not resolved in a typical way but consistently transition to another dissonance like the minor seventh, which behaves like a harmonic version of an escape tone or changing tone. Second, a new notion of “the interval of intervals” has been consistently extracted in several ILL-generated rule traces. Such a “second derivative”, like acceleration in mechanics, might suggest a new microscopic chord structure to consider. Third, new symmetry patterns reveal new possible foundations for building chords, and thus new composition possibilities. For example, as a parallel concept to harmony traditionally built on figured bass (which is indeed the dominant pattern in Bach’s chorales, confirmed by ILL), ILL reveals “figured soprano” as the next alternative in explaining Bach’s music (Figure 9). 
Clearly, figured soprano is not the best view for explaining Bach according to ILL, and is indeed not included in any standard music theory class; yet it may be a more efficient perspective for viewing other types of music (e.g., some Jazz improvisations). This vision coincides with comments made by Casey Sokol (Sokol, 2016), a music professor at York University, whom we quote below: “The idea of Figured Soprano is simply a way of taking this thinking from the top down and bringing it into greater prominence as a creative gesture. So these exercises are not anything new in their ideation, but they can bring many new ideas, chord progressions and much else. It’s a somewhat neglected area of harmonic study and it’s a lot of fun to play with.”\nG.2 HUMAN SUBJECT EXPERIMENT FOR ASSESSING INTERPRETABILITY\nOur second experiment is a human subject experiment, where we collect and assess human-generated verbal interpretations of ILL-generated music rules rendered as sophisticated symbolic and numeric objects. Our goal is to use the results to reveal both the possibilities and the challenges in such a process of decoding expressive messages from AI sources. We treat this as a first step towards (a) a better design of AI representations that are human-interpretable and (b) a general methodology to evaluate the interpretability of AI-discovered knowledge representations. In this experiment, we want to test to what degree our ILL-generated rules are interpretable. Our subject pool includes people who have entry-level math and music theory knowledge; so, by interpretability, we mean interpretable to them. The whole experimental procedure divides into two stages. In the first stage, we collect human interpretations of ILL-generated rules. In the second stage, we assess the collected interpretations to further evaluate the interpretability of the AI-produced knowledge.\nCollect Human Interpretations. 
The experiment was conducted in the form of a two-week written homework assignment for 23 students. Students came from the CS+Music degree program recently launched in our university. Entry-level knowledge of computer science, related math, and music theory is assumed for every student. However, all students were new to our AI system, and none had read any ILL-generated rules before. The homework contained three parts. Part I provided detailed instructions on the format of the rules as exemplified in Figure 8, including both feature-related and probability-related instructions (symmetries were excluded from the tags, since group theory is an unfamiliar subject to these students). More specifically, we provided a verbal definition, a mathematical representation, and typical examples for each of the following terms: chord, window (for coordinate selection), seed feature, feature, rule, n-gram, histogram, data set. A faithful understanding of these eight terms was the only prerequisite for completing the homework. The estimated reading time of the instructions was about an hour. Once this self-training part was completed, the students were ready to go on to the second and third parts—the main body of the homework. Part II contained eleven 1-gram rules—each a histogram specified by a window and seed feature(s); Part III contained fourteen 2-gram rules—each a histogram now specified by a window, seed feature(s), and a conditional. The students were asked to freely write what they saw in each of the histograms, guided by the following two questions. (a) Does the histogram agree or disagree with any of the music concepts/rules you know (write down the music concepts/rules in music-theoretic terms)? (b) Does the histogram suggest something new (i.e., neither an agreement nor a disagreement, with no clear connection to any known knowledge)? Answers to each of the 25 rules came in the form of text, containing word descriptions that “decode” the histogram—a symbolic and pictorial encoding. 
Students were explicitly instructed that writing out a description that was basically a literal repetition of the histogram (e.g., taking each chord modulo 12 results in a 91.2% chance of being (0, 0, 4, 7)) is not acceptable: they must reveal the music behind the math. In fact, we made it clear to the students that we only wanted qualitative descriptions. Students were specifically told (in the instructions) to pay attention only to the relative values of the probabilities, whose exact numbers are unimportant (e.g., what is most likely, what is more likely, what is almost impossible). The homework was due in two weeks. During the two-week period, we asked the students to complete it independently, with no group work or office hours.\nAssess Human Interpretations. The homework was designed such that every rule histogram encoded at least one music concept/rule consistent with standard music theory. In addition, every histogram contained either one additional known music rule or something strange that either conflicted with a known rule or represented something new. We assigned two points per rule. Further, we made an initial rubric containing the (authoritative) music keywords used to describe every rule histogram. Because students’ answers arrived in the form of qualitative text, to ensure the credibility and fairness of the initial rubric, we held a discussion session at a regular lecture time (80 minutes) with all students as well as the teaching staff. During the discussion session, we went over all 25 rules one by one. For each, we first announced the keywords in the initial rubric and explained to the students that these keywords would later be used to grade their homework. However, in the discussion session, every student was encouraged to object to any of our announced keywords and/or to propose new keywords accompanied by a convincing explanation. New or modified keywords that were commonly agreed upon were added or updated in the initial rubric. 
By the end of the discussion session, we had compiled a more inclusive rubric containing the broadly accepted keywords. This rubric-generating process was transparent to all the students. In the final step, we manually graded every student’s answer sheet against the keywords in the rubric and computed their scores. A summary of the students’ performances is presented in Table 4. Except for cases where a student did not do the homework, a major source of score deduction was misunderstanding the n-gram (e.g., the probability of the current chord conditioned on the previous chord was mistakenly interpreted as the probability of the previous chord conditioned on the current one). This may be largely due to unfamiliarity with n-gram models among new CS+Music students. Nevertheless, the majority of the students who did the homework (2/3) succeeded (with respect to the 30/50 passing grade) in interpreting the rules generated by ILL, which in turn provides evidence for the interpretability of the AI-produced knowledge itself."
},
{
"heading": "H CONCLUSION AND BROADER IMPACTS",
"text": "Model transparency and interpretability are important for trustworthy AI, especially when interacting directly with people such as scientists, artists, and even multidisciplinary researchers bridging the Two\nCultures (Snow, 1959) (e.g., like music and chemistry). The core philosophy underlying ILL arises from a humancentered standpoint and our longterm pursuit of “getting humanity back into artificial intelligence”. We strive to develop humanlike artificial intelligence, which in turn may help advance human intelligence—a goal at the intersection of AGI (artificial general intelligence (Goertzel & Pennachin, 2007)), XAI (explainable artificial intelligence (Adadi & Berrada, 2018)), and “AI as augmented intelligence” (Jordan, 2019).\nAs such, the focus of interpretability in this line of research is not just the end result of the model, but the entire learning process. This emphasis on process is not only manifest in this paper (e.g., twophase learning that “starts like a baby and learns like a child” with a full rule trace as output), but also in ongoing ILLdriven realworld projects aimed for beneficent societal impact. To name a few: (a) ILLaided scientific research to accelerate new discoveries, as in biology (Yu et al., 2019); (b) ILLaided artistic creation to enable new ways and new dimensions in one’s creative and/or collaborative experience (art as a process is about more than the work itself); (c) ILLaided personalized education. Discovered scientific knowledge, artistic expression, and educational curricula, may have a dual use character (Kaiser & Moreno, 2012). Nevertheless, making the discovery of abstract knowledge easier may lead to abstraction traps (Selbst et al., 2019) in deploying such learned knowledge in engineering design or policy making.\nEvaluation for ILL and similar technologies should have a humanist perspective, whether comparing to humancodified knowledge or with human subject experiments to assess interpretability. 
Moreover, evaluations for scientific discovery, artistic creativity, and personalized education should not only focus on model performance, but also on the human-centered criteria of how effectively they aid people in achieving their goals. Illuminating rules associated with practice not only helps human students be better rule-followers, but also more creative rule-breakers and rule-makers. Instead of a Turing test for machine-generated music, one might more productively conduct artistic evaluation at a meta-level between human-written music constructed with and without assistance from ILL.\nRegarding biases in data, because ILL works in the “small data” regime, it is easier to curate data to avoid representation biases (Suresh & Guttag, 2019). Manually curating 370 music compositions is possible, but manually curating a billion is not.\nILL can be treated as a complement to many existing AI models, with a special focus on model transparency and explainability. Extensions to ILL could enable it to better cooperate with other models, e.g., as a preprocessing or a post-interpretation tool to achieve superior task performance as well as controllability and interpretability. One such possibility could leverage ILL to analyze the attention matrices (as signals) learned from a Transformer-based NLP model like BERT or GPT (Rogers et al., 2020)."
}
]
 2020
 
SP:1ee00313e354c4594bbf6cf8bdbe33e3ec8df62f
 [
"This paper proposes searching for an architecture generator that outputs good student architectures for a given teacher. The authors claim that by learning the parameters of the generator instead of relying directly on the search space, it is possible to explore the search space of architectures more effectively, increasing the diversity of the architectures explored. They show that this approach combined with the standard knowledge distillation loss is able to learn good student architectures requiring substantially less samples and achieving competitive performances when comparing to other knowledge distillation algorithms."
]
 State-of-the-art results in deep learning have been improving steadily, in good part due to the use of larger models. However, widespread use is constrained by device hardware limitations, resulting in a substantial performance gap between state-of-the-art models and those that can be effectively deployed on small devices. While Knowledge Distillation (KD) theoretically enables small student models to emulate larger teacher models, in practice selecting a good student architecture requires considerable human expertise. Neural Architecture Search (NAS) appears as a natural solution to this problem, but most approaches can be inefficient, as most of the computation is spent comparing architectures sampled from the same distribution, with negligible differences in performance. In this paper, we propose to instead search for a family of student architectures sharing the property of being good at learning from a given teacher. Our approach AutoKD, powered by Bayesian Optimization, explores a flexible graph-based search space, enabling us to automatically learn the optimal student architecture distribution and KD parameters, while being 20× more sample efficient compared to the existing state-of-the-art. We evaluate our method on 3 datasets; on large images specifically, we reach the teacher performance while using 3× less memory and 10× fewer parameters. Finally, while AutoKD uses the traditional KD loss, it outperforms more advanced KD variants using hand-designed students.
 []
 [
{
"authors": [
"Sungsoo Ahn",
"Shell Xu Hu",
"Andreas Damianou",
"Neil D Lawrence",
"Zhenwen Dai"
],
"title": "Variational information distillation for knowledge transfer",
"venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,",
"year": 2019
},
{
"authors": [
"Tom B Brown",
"Benjamin Mann",
"Nick Ryder",
"Melanie Subbiah",
"Jared Kaplan",
"Prafulla Dhariwal",
"Arvind Neelakantan",
"Pranav Shyam",
"Girish Sastry",
"Amanda Askell"
],
"title": "Language models are fewshot learners",
"venue": null,
"year": 2005
},
{
"authors": [
"Fabio Carlucci",
"Pedro M. Esperança",
"Marco Singh",
"Antoine Yang",
"Victor Gabillon",
"Hang Xu",
"Zewei Chen",
"Jun Wang"
],
"title": "MANAS: Multiagent neural architecture",
"venue": null,
"year": 1909
},
{
"authors": [
"Yu Cheng",
"Duo Wang",
"Pan Zhou",
"Tao Zhang"
],
"title": "A survey of model compression and acceleration for deep neural networks",
"venue": null,
"year": 2017
},
{
"authors": [
"Tejalal Choudhary",
"Vipul Mishra",
"Anurag Goswami",
"Jagannathan Sarangapani"
],
"title": "A comprehensive survey on model compression and acceleration",
"venue": "Artificial Intelligence Review,",
"year": 2020
},
{
"authors": [
"Jia Deng",
"Wei Dong",
"Richard Socher",
"LiJia Li",
"Kai Li",
"Li FeiFei"
],
"title": "ImageNet: A largescale hierarchical image database",
"venue": "In Computer Vision and Pattern Recognition (CVPR),",
"year": 2009
},
{
"authors": [
"Stefan Falkner",
"Aaron Klein",
"Frank Hutter"
],
"title": "BOHB: Robust and efficient hyperparameter optimization at scale",
"venue": "In International Conference on Machine Learning (ICML),",
"year": 2018
},
{
"authors": [
"Emile Fiesler",
"Amar Choudry",
"H John Caulfield"
],
"title": "Weight discretization paradigm for optical neural networks",
"venue": "Optical interconnections and networks,",
"year": 1990
},
{
"authors": [
"Jindong Gu",
"Volker Tresp"
],
"title": "Search for better students to learn distilled knowledge",
"venue": "arXiv preprint arXiv:2001.11612,",
"year": 2020
},
{
"authors": [
"Song Han",
"Jeff Pool",
"John Tran",
"William Dally"
],
"title": "Learning both weights and connections for efficient neural network",
"venue": "In Advances in neural information processing systems,",
"year": 2015
},
{
"authors": [
"Geoffrey Hinton",
"Oriol Vinyals",
"Jeff Dean"
],
"title": "Distilling the knowledge in a neural network",
"venue": null,
"year": 2015
},
{
"authors": [
"Geoffrey Hinton",
"Oriol Vinyals",
"Jeff Dean"
],
"title": "Distilling the knowledge in a neural network",
"venue": "arXiv preprint arXiv:1503.02531,",
"year": 2015
},
{
"authors": [
"Yanping Huang",
"Youlong Cheng",
"Ankur Bapna",
"Orhan Firat",
"Dehao Chen",
"Mia Chen",
"HyoukJoong Lee",
"Jiquan Ngiam",
"Quoc V Le",
"Yonghui Wu"
],
"title": "Gpipe: Efficient training of giant neural networks using pipeline parallelism",
"venue": "In Advances in Neural Information Processing Systems (NeurIPS,",
"year": 2019
},
{
"authors": [
"Zehao Huang",
"Naiyan Wang"
],
"title": "Like what you like: Knowledge distill via neuron selectivity transfer",
"venue": "arXiv preprint arXiv:1707.01219,",
"year": 2017
},
{
"authors": [
"Alexander Kolesnikov",
"Lucas Beyer",
"Xiaohua Zhai",
"Joan Puigcerver",
"Jessica Yung",
"Sylvain Gelly",
"Neil Houlsby"
],
"title": "Big Transfer (BiT): General visual representation learning",
"venue": "In European Conference on Computer Vision (ECCV),",
"year": 2020
},
{
"authors": [
"Alex Krizhevsky"
],
"title": "Learning multiple layers of features from tiny images",
"venue": "Technical report, University of Toronto,",
"year": 2009
},
{
"authors": [
"Yann LeCun",
"John S Denker",
"Sara A Solla"
],
"title": "Optimal brain damage",
"venue": "In Advances in neural information processing systems,",
"year": 1990
},
{
"authors": [
"Changlin Li",
"Jiefeng Peng",
"Liuchun Yuan",
"Guangrun Wang",
"Xiaodan Liang",
"Liang Lin",
"Xiaojun Chang"
],
"title": "Blockwisely supervised neural architecture search with knowledge distillation",
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,",
"year": 2020
},
{
"authors": [
"Hao Li",
"Asim Kadav",
"Igor Durdanovic",
"Hanan Samet",
"Hans Peter Graf"
],
"title": "Pruning filters for efficient convnets",
"venue": null,
"year": 2016
},
{
"authors": [
"Lisha Li",
"Kevin Jamieson",
"Giulia DeSalvo",
"Afshin Rostamizadeh",
"Ameet Talwalkar"
],
"title": "Hyperband: A novel banditbased approach to hyperparameter optimization",
"venue": null,
"year": 2016
},
{
"authors": [
"Hanxiao Liu",
"Karen Simonyan",
"Yiming Yang"
],
"title": "DARTS: Differentiable architecture search",
"venue": "In International Conference on Learning Representations (ICLR),",
"year": 2019
},
{
"authors": [
"Yu Liu",
"Xuhui Jia",
"Mingxing Tan",
"Raviteja Vemulapalli",
"Yukun Zhu",
"Bradley Green",
"Xiaogang Wang"
],
"title": "Search to distill: Pearls are everywhere but not the eyes",
"venue": "arXiv preprint arXiv:1911.09074,",
"year": 2019
},
{
"authors": [
"Hieu Pham",
"Melody Guan",
"Barret Zoph",
"Quoc Le",
"Jeff Dean"
],
"title": "Efficient neural architecture search via parameter sharing",
"venue": "In International Conference on Machine Learning (ICML),",
"year": 2018
},
{
"authors": [
"Antonio Polino",
"Razvan Pascanu",
"Dan Alistarh"
],
"title": "Model compression via distillation and quantization",
"venue": "arXiv preprint arXiv:1802.05668,",
"year": 2018
},
{
"authors": [
"Ariadna Quattoni",
"Antonio Torralba"
],
"title": "Recognizing indoor scenes",
"venue": "In Computer Vision and Pattern Recognition (CVPR), pp",
"year": 2009
},
{
"authors": [
"Mohammad Rastegari",
"Vicente Ordonez",
"Joseph Redmon",
"Ali Farhadi"
],
"title": "Xnornet: Imagenet classification using binary convolutional neural networks",
"venue": "In European conference on computer vision,",
"year": 2016
},
{
"authors": [
"Esteban Real",
"Sherry Moore",
"Andrew Selle",
"Saurabh Saxena",
"Yutaka Leon Suematsu",
"Jie Tan",
"Quoc V Le",
"Alexey Kurakin"
],
"title": "Largescale evolution of image classifiers",
"venue": "In International Conference on Machine Learning (ICML),",
"year": 2017
},
{
"authors": [
"Binxin Ru",
"Pedro M Esperanca",
"Fabio M Carlucci"
],
"title": "Neural Architecture Generator Optimization",
"venue": "In Neural Information Processing Systems (NeurIPS),",
"year": 2020
},
{
"authors": [
"Daniel Soudry",
"Itay Hubara",
"Ron Meir"
],
"title": "Expectation backpropagation: Parameterfree training of multilayer neural networks with continuous or discrete weights",
"venue": "In Advances in neural information processing systems,",
"year": 2014
},
{
"authors": [
"Christian Szegedy",
"Sergey Ioffe",
"Vincent Vanhoucke",
"Alex Alemi"
],
"title": "Inceptionv4, inceptionresnet and the impact of residual connections on learning",
"venue": "arXiv preprint arXiv:1602.07261,",
"year": 2016
},
{
"authors": [
"Yonglong Tian",
"Dilip Krishnan",
"Phillip Isola"
],
"title": "Contrastive representation distillation",
"venue": "In International Conference on Learning Representations,",
"year": 2020
},
{
"authors": [
"Ilya Trofimov",
"Nikita Klyuchnikov",
"Mikhail Salnikov",
"Alexander Filippov",
"Evgeny Burnaev"
],
"title": "Multifidelity neural architecture search with knowledge distillation",
"venue": "arXiv preprint arXiv:2006.08341,",
"year": 2020
},
{
"authors": [
"Frederick Tung",
"Greg Mori"
],
"title": "Similaritypreserving knowledge distillation",
"venue": "In Proceedings of the IEEE International Conference on Computer Vision,",
"year": 2019
},
{
"authors": [
"Saining Xie",
"Alexander Kirillov",
"Ross Girshick",
"Kaiming He"
],
"title": "Exploring randomly wired neural networks for image recognition",
"venue": null,
"year": 1904
},
{
"authors": [
"Antoine Yang",
"Pedro M Esperança",
"Fabio M Carlucci"
],
"title": "NAS evaluation is frustratingly hard",
"venue": "In International Conference on Learning Representations (ICLR),",
"year": 2020
},
{
"authors": [
"Li Yuan",
"Francis EH Tay",
"Guilin Li",
"Tao Wang",
"Jiashi Feng"
],
"title": "Revisiting knowledge distillation via label smoothing regularization",
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,",
"year": 2020
},
{
"authors": [
"Sukmin Yun",
"Jongjin Park",
"Kimin Lee",
"Jinwoo Shin"
],
"title": "Regularizing classwise predictions via selfknowledge distillation",
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,",
"year": 2020
},
{
"authors": [
"Sergey Zagoruyko",
"Nikos Komodakis"
],
"title": "Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer",
"venue": "arXiv preprint arXiv:1612.03928,",
"year": 2016
},
{
"authors": [
"Arber Zela",
"Aaron Klein",
"Stefan Falkner",
"Frank Hutter"
],
"title": "Towards automated deep learning: Efficient joint neural architecture and hyperparameter search",
"venue": null,
"year": 2018
},
{
"authors": [
"Chenzhuo Zhu",
"Song Han",
"Huizi Mao",
"William J Dally"
],
"title": "Trained ternary quantization",
"venue": "arXiv preprint arXiv:1612.01064,",
"year": 2016
},
{
"authors": [
"Xiatian Zhu",
"Shaogang Gong"
],
"title": "Knowledge distillation by onthefly native ensemble",
"venue": "In Advances in neural information processing systems,",
"year": 2018
}
]
 [
{
"heading": "1 INTRODUCTION",
"text": "Recentlydeveloped deep learning models have achieved remarkable performance in a variety of tasks. However, breakthroughs leading to stateoftheart (SOTA) results often rely on very large models: GPipe, Big Transfer and GPT3 use 556 million, 928 million and 175 billion parameters, respectively (Huang et al., 2019; Kolesnikov et al., 2020; Brown et al., 2020).\nDeploying these models on user devices (e.g. smartphones) is currently impractical as they require large amounts of memory and computation; and even when large devices are an option (e.g. GPU clusters), the cost of largescale deployment (e.g. continual inference) can be very high (Cheng et al., 2017). Additionally, target hardware does not always natively or efficiently support all operations used by SOTA architectures. The applicability of these architectures is, therefore, severely limited, and workarounds using smaller or simplified models lead to a performance gap between the technology available at the frontier of deep learning research and that usable in industry applications.\nIn order to bridge this gap, Knowledge Distillation (KD) emerges as a potential solution, allowing small student models to learn from, and emulate the performance of, large teacher models (Hinton et al., 2015a). The student model can be constrained in its size and type of operations used, so that it will satisfy the requirements of the target computational environment. Unfortunately, successfully achieving this in practice is extremely challenging, requiring extensive human expertise. For example, while we know that the architecture of the student is important for distillation (Liu et al., 2019b), it remains unclear how to design the optimal network given some hardware constraints.\nWith Neural Architecture Search (NAS) it is possible to discover an optimal student architecture. 
NAS automates the choice of neural network architecture for a specific task and dataset, given a search space of architectures and a search strategy to navigate that space (Pham et al., 2018; Real et al., 2017; Liu et al., 2019a; Carlucci et al., 2019; Zela et al., 2018; Ru et al., 2020). One important limitation of most NAS approaches is that the search space is very restricted, with a high proportion of resources spent on evaluating very similar architectures, thus rendering the approach limited in its effectiveness (Yang et al., 2020). This is because traditional NAS approaches have no tools for distinguishing between architectures that are similar and architectures that are very different; as a consequence, computational resources are needed to compare even insignificant changes in the model. Conversely, properly exploring a large space requires huge computational resources: for example, recent work by Liu et al. (2019b) investigating how to find the optimal student requires evaluating 10,000 models. By focusing on the comparison between distributions we ensure that computational resources are spent only on meaningful differences, thus performing significantly more efficiently: we evaluate 33× fewer architectures than the work most related to ours (Liu et al., 2019b).\nTo overcome these limitations, we propose an automated approach to knowledge distillation, in which we look for a family of good students rather than a specific model. We find that even though our method, AutoKD, does not output one specific architecture, all architectures sampled from the optimal family of students perform well when trained with KD. 
This reformulation of the NAS problem provides a more expressive search space containing very diverse architectures, thus increasing the effectiveness of the search procedure in finding good student networks.\nOur contributions are as follows: (A) a framework for combining KD with NAS that effectively emulates large models while using a fraction of the memory and of the parameters; (B) by searching for an optimal student family, rather than for specific architectures, our algorithm is up to 20× more sample efficient than alternative NAS-based KD solutions; (C) we significantly outperform advanced KD methods on a benchmark of vision datasets, despite using the traditional KD loss, showcasing the efficacy of our found students."
},
{
"heading": "2 RELATED WORK",
"text": "Model compression has been studied since the beginning of the machine learning era, with multiple solutions being proposed (Choudhary et al., 2020; Cheng et al., 2017). Pruning based methods allow the removal of nonessential parameters from the model, with littletonone drop in final performance. The primary motive of these approaches was to reduce the storage requirement, but they can also be used to speed up the model (LeCun et al., 1990; Han et al., 2015; Li et al., 2016a). The idea behind quantization methods is to reduce the number of bits used to represent the weights and the activations in a model; depending on the specific implementation this can lead to reduced storage, reduced memory consumption and a general speedup of the network (Fiesler et al., 1990; Soudry et al., 2014; Rastegari et al., 2016; Zhu et al., 2016). In low rank factorization approaches, a given weight matrix is decomposed into the product of smaller ones, for example using singular value decomposition. When applied to fully connected layers this leads to reduced storage, while when applied to convolutional filters, it leads to faster inference (Choudhary et al., 2020).\nAll the above mentioned techniques can successfully reduce the complexity of a given model, but are not designed to substitute specific operations. For example, specialized hardware devices might only support a small subset of all the operations offered by modern deep learning frameworks. In Knowledge Distillation approaches, a large model (the teacher) distills its knowledge into a smaller student architecture (Hinton et al., 2015b). This knowledge is assumed to be represented in the neural network’s output distribution, hence in the standard KD framework, the output distribution of a student’s network is optimized to match the teacher’s output distribution for all the training data (Yun et al., 2020; Ahn et al., 2019; Yuan et al., 2020; Tian et al., 2020; Tung & Mori, 2019).\nThe work of Liu et al. 
(2019b) shows that the architecture of a student network is a contributing factor in its ability to learn from a given teacher. The authors propose combining KD with a traditional NAS pipeline, based on Reinforcement Learning, to find the optimal student. While this setup leads to good results, it does so at a huge computational cost, requiring over 5 days on 200 TPUs. Similarly, Gu & Tresp (2020) also look for the optimal student architecture, but do so by searching for a subgraph of the original teacher; therefore, it cannot be used to substitute unsupported operations.\nOrthogonal approaches, looking at how KD can improve NAS, are explored by Trofimov et al. (2020) and Li et al. (2020). The first establishes that KD improves the correlation between different budgets in multi-fidelity methods, while the second uses the teacher supervision to search the architecture in a block-wise fashion."
},
{
"heading": "3 SEARCHING FOR THE OPTIMAL STUDENT NETWORK GENERATOR",
"text": "The AutoKD framework (Fig. 1) combines Bayesian Optimization (BO), Neural Architecture Search (NAS) and Knowledge Distillation (KD). AutoKD defines a family of random network generators G(θ) parameterized by a hyperparameter θ, from where student networks are sampled. BO uses a surrogate model to propose generator hyperparameters, while students from these generators are trained with KD using a stateoftheart teacher network. The student performances are evaluated and provided as feedback to update the BO surrogate model. To improve our BO surrogate model, the search procedure is iterated, until the best family of student networks G(θ∗) is selected. In this section we specify all components of AutoKD. See also Fig. 1 and Algorithm 1 for an overview."
},
{
"heading": "3.1 KNOWLEDGE DISTILLATION",
"text": "Knowledge Distillation (KD; Hinton et al., 2015b) is a method to transfer, or distill, knowledge from one model to another—usually from a large model to small one—such that the small student model learns to emulate the performance of the large teacher model. KD can be formalized as minimizing the objective function:\nLKD = ∑ xi∈X l(fT (xi), fS(xi)) (1)\nwhere l is the loss function that measures the difference in performance between the teacher fT and the student fS , xi is the ith input, yi is the ith target. The conventional loss function l used in practice is a linear combination of the traditional cross entropy loss LCE and the Kullback–Leibler divergence LKL of the presoftmax outputs for fT and fS :\nl = (1− α)LCE + αLKL (σ (fT (xi)/τ) , σ (fS(xi)/τ)) (2)\nwhere σ is the softmax function σ(x) = 1/(1 + exp(−x)), and τ is the softmax temperature. Hinton et al. (2015b) propose “softening” the probabilities using temperature scaling with τ ≥ 1. The parameter α represents the weight tradeoff between the KL loss and the cross entropy loss LCE. The LKD loss is characterized by the hyperparameters: α and τ ; popular choices are τ ∈ {3, 4, 5} and α = 0.9 (Huang & Wang, 2017; Zagoruyko & Komodakis, 2016; Zhu et al., 2018). Numerous other methods (Polino et al., 2018; Huang & Wang, 2017; Tung & Mori, 2019) can be formulated as a form of Equation (2), but in this paper we use the conventional loss function l.\nTraditionally in KD, both the teacher and the student network have predefined architectures. In contrast, AutoKD defines a search space of student network architectures and finds the optimal student by leveraging neural architecture search, as detailed below."
},
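The conventional KD loss of Section 3.1 can be written in a few lines. Below is an illustrative numpy sketch, not the paper's implementation; the names (`kd_loss`, `alpha`, `tau`) and the omission of the τ² gradient-scaling factor used in some KD variants are our own choices.

```python
import numpy as np

def softmax(z, tau=1.0):
    # Temperature-scaled softmax; tau >= 1 "softens" the distribution.
    z = np.asarray(z, dtype=float) / tau
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, target, alpha=0.9, tau=4.0):
    """Conventional KD loss l = (1 - alpha) * CE + alpha * KL, as in equation (2).

    `target` is the index of the true class. A sketch under assumed
    conventions, not the authors' code.
    """
    p_s = softmax(student_logits)            # student probabilities at tau = 1 for CE
    ce = -np.log(p_s[target])                # cross-entropy with the hard label
    p_t_tau = softmax(teacher_logits, tau)   # softened teacher distribution
    p_s_tau = softmax(student_logits, tau)   # softened student distribution
    kl = np.sum(p_t_tau * (np.log(p_t_tau) - np.log(p_s_tau)))
    return (1 - alpha) * ce + alpha * kl
```

With α = 1 the student only imitates the teacher (the KL term vanishes when their logits coincide); with α = 0 the loss reduces to the usual cross-entropy.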
{
"heading": "3.2 STUDENT SEARCH VIA GENERATOR OPTIMIZATION",
"text": "Most NAS method for vision tasks employ a cellbased search space, where networks are built by stacking building blocks (cells) and the operations inside the cell are searched (Pham et al., 2018; Real et al., 2017; Liu et al., 2019a). This results in a single architecture being output by the NAS procedure. In contrast, more flexible search spaces have recently been proposed that are based on\nAlgorithm 1: AutoKD 1: Input: Network generator G, BOHB hyperparameters(η, training budget bmin and bmax),\nEvaluation function fKD(θ, b) which assesses the validation performance of a generator hyperparameterθ by sampling an architecture from G(θ) and training it with the KD loss LKD (equations 1 and 2) for b epochs.\n2: smax = blogη bmaxbmin c; 3: for s ∈ {smax, smax − 1, . . . , 0} do 4: Sample M = d smax+1s+1 · η\nse generator hyperparameters Θ = {θj}Mj=1 which maximises the raito of kernel density estimators ; . (Falkner et al., 2018, Algorithm 2)\n5: Initialise b = ηs · bmax ; . Run Successive Halving (Li et al., 2016b) 6: while b ≤ bmax do 7: L = {fKD(θ, b) : θ ∈ Θ}; 8: Θ = top k(Θ,L, bΘ/ηc); 9: b = η · b;\n10: end while 11: end for 12: Obtain the best performing configuration θ∗ for the student network generator. 13: Sample k architectures from G(θ∗), train them to completion, and obtain test performance.\nneural network generators (Xie et al., 2019; Ru et al., 2020). The generator hyperparameters define the characteristics of the family of networks being generated.\nNAGO optimizes an architecture generator instead of a single architecture and proposes a hierarchical graphbased space which is highly expressive yet lowdimensional (Ru et al., 2020). Specifically, the search space of NAGO comprises three levels of graphs (where the node in the higher level is a lowerlevel graph). The top level is a graph of cells (Gtop) and each cell is itself a graph of middlelevel modules (Gmid). 
Each module further corresponds to a graph of bottom-level operation units (Gbottom), such as a relu-conv3×3-bn triplet. NAGO adopts three random graph generators to define the connectivity/topology of Gtop, Gmid and Gbottom respectively, and thus is able to produce a wide variety of architectures with only a few generator hyperparameters. AutoKD employs NAGO as the NAS backbone for finding the optimal student family.\nOur pipeline consists of two phases. In the first phase (search), a multi-fidelity Bayesian optimisation technique, BOHB (Falkner et al., 2018), is employed to optimise the low-dimensional search space. BOHB uses partial evaluations with smaller-than-full budget to exclude bad configurations early in the search process, thus saving resources to evaluate more promising configurations. Given the same time constraint, BOHB evaluates many more configurations than conventional BO, which evaluates all configurations with the full budget. As Ru et al. (2020) empirically observe that good generator hyperparameters lead to a tight distribution of well-performing architectures (small performance standard deviation), we similarly assess the performance of a particular generator hyperparameter value with only one architecture sample. In the second phase (retrain), AutoKD uniformly samples multiple architectures from the optimal generator found during the search phase and evaluates them with longer training budgets to obtain the best architecture performance.\nInstead of the traditionally used cross-entropy loss, AutoKD uses the KD loss in equation 2 to allow the sampled architecture to distill knowledge from its teacher. The KD hyperparameters, temperature τ and loss weight α, are included in the search space and optimized simultaneously with the architecture, to ensure that the student architectures can efficiently distill knowledge both from the designated teacher and from the data distribution. A full overview of the framework is shown in Fig. 1."
},
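The Successive Halving inner loop of Algorithm 1 (lines 5–10) can be sketched as follows. This is a minimal illustration, not the BOHB implementation: `evaluate(theta, b)` is an abstract stand-in for fKD (training a student sampled from G(θ) with the KD loss for b epochs), and all names are our own.

```python
def successive_halving(configs, evaluate, b_min, b_max, eta=3):
    """Evaluate all configs at budget b, keep the top |configs|/eta,
    multiply b by eta, and repeat until b exceeds b_max.

    `evaluate(theta, b)` returns a validation score (higher is better).
    """
    b = b_min
    while b <= b_max and configs:
        scores = {theta: evaluate(theta, b) for theta in configs}
        k = max(1, len(configs) // eta)  # survivors for the next rung
        configs = sorted(configs, key=lambda t: scores[t], reverse=True)[:k]
        b *= eta
    return configs[0]  # best surviving configuration
```

For example, with a toy budget-independent objective peaked at θ = 0.7, the loop discards the weaker candidates at the cheapest budget and only the best configuration survives to the full budget.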
{
"heading": "4 EXPERIMENTS",
"text": "The first part of this section studies how KD can improve the performance of our chosen NAS backbone (NAGO). In the second part, we show how a family of students, when trained with KD (AutoKD), can emulate much larger teachers, significantly outperforming current handcrafted architectures.\nExperimental setup. All of our experiments were run on the two, smallimage, standard object recognition datasets CIFAR10 and CIFAR100 (Krizhevsky, 2009), as well as MIT67 for largeimage scene recognition (Quattoni & Torralba, 2009). We limit the number of student network parameters to 4.0M for smallimage tasks and 6.0M for largeimage tasks. Following Liu et al. (2019b), we picked InceptionResnetV2 (Szegedy et al., 2016) as a teacher for the large image dataset. As that model could not be directly applied to small images, and to explore the use of a machinedesigned network as a teacher, we decided to use the best DARTS (Liu et al., 2019a) architecture to guide the search on the CIFAR datasets. For ImageNet (Deng et al., 2009), we use a InceptionResnetV2 teacher. All experiments are run on NVIDIA Tesla V100 GPUs.\nNAS implementation. Our approach follows the search space and BObased search protocol proposed by NAGO (Ru et al., 2020), as such our student architectures are based on hierarchical random graphs. Likewise, we employ a multifidelity evaluation scheme based on BOHB (Falkner et al., 2018) where candidates are trained for different epochs (30, 60 and 120) and then evaluated on the validation set. In total, only ∼300 models are trained during the search procedure: using 8 GPUs, this amounts to∼2.5 days of compute on the considered datasets. At the end of the search, we sample 8 architectures from the best found generator, train them for 600 epochs (with KD, using the optimal temperature and loss weight found during the search), and report the average performance (top1 test accuracy). All remaining training parameters were set following Ru et al. 
(2020).\nIn AutoKD, we include the knowledge distillation hyperparameters, temperature and weight, in the search space, so that they are optimized alongside the architecture. The temperature ranges from 1 to 10, while the weight ranges from 0 to 1. Fig. 8 (Appendix) illustrates the importance of these hyperparameters when training a randomly sampled model, lending support to their inclusion."
},
{
"heading": "4.1 IMPACT OF KNOWLEDGE DISTILLATION ON NAS",
"text": "To understand the contribution from KD, we first compare vanilla NAGO with AutoKD on CIFAR100. Fig. 2 shows the validation accuracy distribution at different epochs: clearly, using KD leads to better performing models. Indeed this can be seen in more detail in Fig. 3, where we show the performance of the best found model vs the wall clock time for each budget. It is worth mentioning that while the KD version takes longer (as it needs to compute the lessons on the fly), it consistently outperforms vanilla NAGO by a significant margin on all three datasets.\nNote that accuracies in Fig. 3 refer to the best models found during the search process, while Fig. 2 shows the histograms of all models evaluated during search, which are by definition lower in accuracy, on average. At the end of search, the model is retrained for longer (as commonly done in NAS methods), thus leading to the higher accuracies also shown in Figs. 6, 7.\nNot only does AutoKD offer better absolute performance, but it also enables better multifidelity correlation, as can be seen in Fig. 4. For example, the correlation between 30 and 120 epochs improves from 0.49 to 0.82 by using KD, a result that is consistent with the findings in Trofimov et al. (2020). Note that multifidelity methods work under the assumption that the rankings at different budgets remains consistent to guarantee that the best models progress to the next stage. A high correlation between the rankings is, as such, crucial."
},
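The budget correlations reported in Section 4.1 are rank correlations between the same candidates evaluated at two fidelities (e.g. 30 vs. 120 epochs). A minimal sketch of Spearman's ρ, assuming no tied scores (a simplifying assumption; the paper does not specify its estimator):

```python
def spearman_rho(scores_low, scores_high):
    """Spearman rank correlation between two fidelity levels, e.g.
    validation accuracies of the same candidates at 30 vs. 120 epochs.
    Assumes no ties. A high rho means cheap low-budget evaluations
    rank candidates the same way full-budget evaluations would.
    """
    n = len(scores_low)

    def ranks(xs):
        order = sorted(range(n), key=lambda i: xs[i])  # indices by ascending score
        r = [0] * n
        for rank, i in enumerate(order):
            r[i] = rank
        return r

    ra, rb = ranks(scores_low), ranks(scores_high)
    d2 = sum((a - b) ** 2 for a, b in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))  # classic no-ties formula
```

Identical orderings give ρ = 1, fully reversed orderings give ρ = −1; a value like 0.82 indicates that early-stopped evaluations are a trustworthy proxy for final rankings.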
{
"heading": "4.2 LARGE MODEL EMULATION",
"text": "At its core, AutoKD’s goal is to emulate the performance of large SOTA models with smaller students. Fig. 6 shows how the proposed method manages to reach the teacher’s performance while using only 1/9th of the memory on small image datasets. On MIT67, the found architecture is not only using 1/3rd of the memory, but also 1/10th of parameters. Finally, it is worth noting how AutoKD increases student performance, as such the high final accuracy cannot only be explained by the NAS procedure. Indeed, looking at Fig. 7 it is clear how KD improves both the speed of convergence and the final accuracy. Furthermore, as shown in Fig. 5, the optimal family of architectures is actually different when searched with KD.\nMIT67, CIFAR100, CIFAR10. Table 1 shows the comparison of AutoKD with other KD methods. Notice how learning the student architecture allows AutoKD to outperform a variety of more advanced KD approaches while emplying a smaller parameter count in the student. The exception to this is CIFAR10, where AutoKD outperforms other methods but with a larger number of parameters. This is because the default networks in the NAGO search space have 4M parameters, which is too large for this application. Accuracywise, the only method doing better on CIFAR100, Yuan et al. (2020), does so with a student with significantly more parameters (34M vs 4M). Finally, AutoKD is orthogonal to advanced KD approaches and could be combined with any of them for even further increases in performance.\nImageNet. The improved results on smaller datasets extend to large datasets as well. On ImageNet, AutoKD reaches 78.0% top1 accuracy, outperforming both Liu et al. (2019b) using the same teacher (75.5%) and vanilla NAGO (76.8%)."
},
{
"heading": "5 DISCUSSION AND CONCLUSION",
"text": "Improving Knowledge Distillation by searching for the optimal student architecture is a promising idea that has recently started to gain attention in the community (Liu et al., 2019b; Trofimov et al., 2020; Gu & Tresp, 2020). In contrast with earlier KDNAS approaches, which search for specific architectures, our method searches for a family of networks sharing the same characteristics. The\nmain benefit of this approach is sample efficiency: while traditional methods spend many computational resources evaluating similar architectures (Yang et al., 2020), AutoKD is able to avoid this pitfall: for instance, the method of Liu et al. (2019b) requires ∼10, 000 architecture samples, while AutoKD can effectively search for the optimal student family with only 300 samples. Compared to traditional KD methods, AutoKD is capable of achieving better performance with student architectures that have less parameters (and/or use less memory) than handdefined ones.\nOur message “DON’T BE PICKY” refers to the fact that the macrostructure (connectivity and capacity) of a network is more important than its microstructure (the specific operations). This has been shown to be true for nonKD NAS (Xie et al., 2019; Ru et al., 2020) and is here experimentally confirmed for KDNAS as well. Changing the focus of optimization in this way releases computational resources that can be used to effectively optimize the global properties of the network. Additionally, the fact that a family of architectures can characterized by a small number of hyperparameters makes the comparison of architectures more meaningful and interpretable. 
In the current implementation, AutoKD finds the optimal student family, in which all sampled architectures perform well: future work should explore how to fully exploit this distribution, possibly fine-tuning the network distribution to obtain an even better-performing model.\nTo summarize, AutoKD offers a strategy to efficiently emulate large, state-of-the-art models with a fraction of the model size. Indeed, our family of searched students consistently outperforms the best hand-crafted students on CIFAR10, CIFAR100 and MIT67."
}
]
 2020
 
SP:eea3b3ec32cce61d6b6df8574cf7ce9376f2230a
 [
"The paper proposes a defense that works by adding multiple targeted adversarial perturbations (with random classes) on the input sample before classifying it. There is little theoretical reasoning for why this is a sensible defense. More importantly though, the defense is only evaluated in an oblivious threat model where the attacker is unaware of the defense mechanism. As has been argued again and again in the literature and in community guidelines such as [1, 2], the oblivious threat model is trivial and yields absolutely no insights into the effectiveness of a defense (e.g. you can just manipulate the backpropagated gradient in random ways to prevent any gradientbased attack from finding adversarial perturbations). The problem with oblivious attacks is clearly visible in the results section where more PGD iterations are less effective than fewer iterations  a clear red flag that the evaluation is ineffective. The paper also fails to point out that Pang et al. 2020, one of the methods they combine their method with, has been shown to be ineffective [2]."
]
 Studies show that neural networks are susceptible to adversarial attacks. This exposes a potential threat to neural network-based artificial intelligence systems. We observe that the probability of the correct result outputted by the neural network increases when small perturbations generated for non-predicted class labels are applied to adversarial examples. Based on this observation, we propose a method of counteracting adversarial perturbations to resist adversarial examples. In our method, we randomly select a number of class labels and generate small perturbations for these selected labels. The generated perturbations are added together and then clamped onto a specified space. The obtained perturbation is finally added to the adversarial example to counteract the adversarial perturbation contained in the example. The proposed method is applied at inference time and does not require retraining or fine-tuning the model. We validate the proposed method on CIFAR10 and CIFAR100. The experimental results demonstrate that our method effectively improves the defense performance of the baseline methods, especially against strong adversarial examples generated using more iterations.
 []
 [
{
"authors": [
"Anish Athalye",
"Nicholas Carlini",
"David Wagner"
],
"title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples",
"venue": "International Conference on Machine Learning,",
"year": 2018
},
{
"authors": [
"Nicholas Carlini",
"David Wagner"
],
"title": "Adversarial examples are not easily detected: Bypassing ten detection methods",
"venue": "In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security,",
"year": 2017
},
{
"authors": [
"Jacob Devlin",
"MingWei Chang",
"Kenton Lee",
"Kristina Toutanova. Bert"
],
"title": "Pretraining of deep bidirectional transformers for language understanding",
"venue": "arXiv preprint arXiv:1810.04805,",
"year": 2018
},
{
"authors": [
"Alhussein Fawzi",
"SeyedMohsen MoosaviDezfooli",
"Pascal Frossard"
],
"title": "Robustness of classifiers: from adversarial to random noise",
"venue": "In Advances in Neural Information Processing Systems,",
"year": 2016
},
{
"authors": [
"Ian J Goodfellow",
"Jonathon Shlens",
"Christian Szegedy"
],
"title": "Explaining and harnessing adversarial examples",
"venue": "International Conference on Learning Representations,",
"year": 2015
},
{
"authors": [
"Chuan Guo",
"Mayank Rana",
"Moustapha Cisse",
"Laurens Van Der Maaten"
],
"title": "Countering adversarial images using input transformations",
"venue": "International Conference on Learning Representations,",
"year": 2018
},
{
"authors": [
"Kaiming He",
"Xiangyu Zhang",
"Shaoqing Ren",
"Jian Sun"
],
"title": "Deep residual learning for image recognition",
"venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,",
"year": 2016
},
{
"authors": [
"Shengyuan Hu",
"Tao Yu",
"Chuan Guo",
"WeiLun Chao",
"Kilian Q Weinberger"
],
"title": "A new defense against adversarial images: Turning a weakness into a strength",
"venue": "In Advances in Neural Information Processing Systems,",
"year": 2019
},
{
"authors": [
"Alex Krizhevsky",
"Geoffrey Hinton"
],
"title": "Learning multiple layers of features from tiny images",
"venue": null,
"year": 2009
},
{
"authors": [
"Alexey Kurakin",
"Ian Goodfellow",
"Samy Bengio"
],
"title": "Adversarial machine learning at scale",
"venue": "arXiv preprint arXiv:1611.01236,",
"year": 2016
},
{
"authors": [
"Alex Lamb",
"Vikas Verma",
"Juho Kannala",
"Yoshua Bengio"
],
"title": "Interpolated adversarial training: Achieving robust neural networks without sacrificing too much accuracy",
"venue": "In Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security,",
"year": 2019
},
{
"authors": [
"Aleksander Madry",
"Aleksandar Makelov",
"Ludwig Schmidt",
"Dimitris Tsipras",
"Adrian Vladu"
],
"title": "Towards deep learning models resistant to adversarial attacks",
"venue": "International Conference on Learning Representations,",
"year": 2018
},
{
"authors": [
"Tianyu Pang",
"Chao Du",
"Jun Zhu"
],
"title": "Maxmahalanobis linear discriminant analysis networks",
"venue": "International Conference on Machine Learning,",
"year": 2018
},
{
"authors": [
"Tianyu Pang",
"Kun Xu",
"Jun Zhu"
],
"title": "Mixup inference: Better exploiting mixup to defend adversarial attacks",
"venue": "International Conference on Learning Representations,",
"year": 2020
},
{
"authors": [
"Jinhwan Park",
"Yoonho Boo",
"Iksoo Choi",
"Sungho Shin",
"Wonyong Sung"
],
"title": "Fully neural network based speech recognition on mobile and embedded devices",
"venue": "In Advances in Neural Information Processing Systems,",
"year": 2018
},
{
"authors": [
"Adam Paszke",
"Sam Gross",
"Soumith Chintala",
"Gregory Chanan",
"Edward Yang",
"Zachary DeVito",
"Zeming Lin",
"Alban Desmaison",
"Luca Antiga",
"Adam Lerer"
],
"title": "Automatic differentiation in pytorch",
"venue": null,
"year": 2017
},
{
"authors": [
"Christian Szegedy",
"Wojciech Zaremba",
"Ilya Sutskever",
"Joan Bruna",
"Dumitru Erhan",
"Ian Goodfellow",
"Rob Fergus"
],
"title": "Intriguing properties of neural networks",
"venue": "International Conference on Learning Representations,",
"year": 2014
},
{
"authors": [
"Pedro Tabacof",
"Eduardo Valle"
],
"title": "Exploring the space of adversarial images",
"venue": "In 2016 International Joint Conference on Neural Networks (IJCNN),",
"year": 2016
},
{
"authors": [
"Cihang Xie",
"Jianyu Wang",
"Zhishuai Zhang",
"Zhou Ren",
"Alan Yuille"
],
"title": "Mitigating adversarial effects through randomization",
"venue": "International Conference on Learning Representations,",
"year": 2018
},
{
"authors": [
"Hongyi Zhang",
"Moustapha Cisse",
"Yann N Dauphin",
"David LopezPaz"
],
"title": "mixup: Beyond empirical risk minimization",
"venue": "International Conference on Learning Representations,",
"year": 2018
},
{
"authors": [
"Huan Zhang",
"Hongge Chen",
"Chaowei Xiao",
"Sven Gowal",
"Robert Stanforth",
"Bo Li",
"Duane Boning",
"ChoJui Hsieh"
],
"title": "Towards stable and efficient training of verifiably robust neural networks",
"venue": "International Conference on Learning Representations,",
"year": 2020
}
]
 [
{
"heading": "1 INTRODUCTION",
"text": "Deep neural networks (DNNs) have become the dominant approach for various tasks including image understanding, natural language processing and speech recognition (He et al., 2016; Devlin et al., 2018; Park et al., 2018). However, recent studies demonstrate that neural networks are vulnerable to adversarial examples (Szegedy et al., 2014; Goodfellow et al., 2015). That is, these network models make an incorrect prediction with high confidence for inputs that are only slightly different from correctly predicted examples. This reveals a potential threat to neural networkbased artificial intelligence systems, many of which have been widely deployed in realworld applications.\nThe adversarial vulnerability of neural networks reveals fundamental blind spots in the learning algorithms. Even with advanced learning and regularization techniques, neural networks are not learning the true underlying distribution of the training data, although they can obtain extraordinary performance on test sets. This phenomenon is now attracting much research attention. There have been increasing studies attempting to explain neural networks’ adversarial vulnerability and develop methods to resist adversarial examples (Madry et al., 2018; Zhang et al., 2020; Pang et al., 2020). While much progress has been made, most existing studies remain preliminary. Because it is difficult to construct a theoretical model to explain the adversarial perturbation generating process, defending against adversarial attacks is still a challenging task.\nExisting methods of resisting adversarial perturbations perform defense either at training time or inference time. Training time defense methods attempt to increase model capacity to improve adversarial robustness. One of the commonly used methods is adversarial training (Szegedy et al., 2014), in which a mixture of adversarial and clean examples are used to train the neural network. 
The adversarial training method can be seen as minimizing the worst-case loss when the training example is perturbed by an adversary (Goodfellow et al., 2015). Adversarial training requires an adversary to generate adversarial examples in the training procedure. This can significantly increase the training time. Adversarial training also results in reduced performance on clean examples. Lamb et al. (2019) recently introduced interpolated adversarial training (IAT), which incorporates interpolation-based training into the adversarial training framework. The IAT method helps to improve performance on clean examples while maintaining adversarial robustness.\nAs to inference-time defense methods, the main idea is to transform adversarial perturbations such that the obtained inputs are no longer adversarial. Tabacof & Valle (2016) studied the use of random noise such as Gaussian noise and heavy-tail noise to resist adversarial perturbations. Xie et al. (2018) introduced two randomization operations, i.e., random resizing and random zero padding, applied to inputs to improve adversarial robustness. Guo et al. (2018) investigated the use of random cropping and rescaling to transform adversarial perturbations. More recently, Pang et al. (2020) proposed the mixup inference method, which uses the interpolation between the input and a randomly selected clean image for inference. This method can shrink adversarial perturbations somewhat through the interpolation operation. Inference-time defense methods can be directly applied to off-the-shelf network models without retraining or fine-tuning them. This can be much more efficient compared to training-time defense methods.\nThough adversarial perturbations are not readily perceivable by a human observer, it is suggested that adversarial examples lie outside the natural image manifold (Hu et al., 2019). 
Previous studies have suggested that adversarial vulnerability is caused by the locally unstable behavior of classifiers on data manifolds (Fawzi et al., 2016; Pang et al., 2018). Pang et al. (2020) also suggested that adversarial perturbations have the locality property and could be resisted by breaking the locality. Existing inference-time defense methods mainly use stochastic transformations such as mixup and random cropping and rescaling to break the locality. In this research, we observe that applying small perturbations generated for non-predicted class labels to the adversarial example helps to counteract the adversarial effect. Motivated by this observation, we propose a method that employs small perturbations to counteract adversarial perturbations. In the proposed method, we generate small perturbations using local first-order gradient information for a number of randomly selected class labels. These small perturbations are added together and projected onto a specified space before finally being applied to the adversarial example. Our method can be used as a preliminary step before applying existing inference-time defense methods.\nTo the best of our knowledge, this is the first research on using local first-order gradient information to resist adversarial perturbations. Successful attack methods such as projected gradient descent (PGD) (Madry et al., 2018) usually use local gradients to obtain adversarial perturbations. Compared to random transformations, it can be more effective to use local gradients to resist adversarial perturbations. We show through experiments that our method is effective and complementary to random transformation-based methods in improving defense performance.\nThe contributions of this paper can be summarized as follows:\n• We propose a method that uses small first-order perturbations to defend against adversarial attacks. 
We show that our method is effective in counteracting adversarial perturbations and improving adversarial robustness.\n• We evaluate our method on CIFAR10 and CIFAR100 against PGD attacks in different settings. The experimental results demonstrate that our method significantly improves the defense performance of the baseline methods against both untargeted and targeted attacks and that it performs well in resisting strong adversarial examples generated using more iterations."
},
{
"heading": "2 PRELIMINARY",
"text": ""
},
{
"heading": "2.1 ADVERSARIAL EXAMPLES",
"text": "We consider a neural network f(·) with parameters θ that outputs a vector of probabilities for L = {1, 2, ..., l} categories. In supervised learning, empirical risk minimization (ERM) (Vapnik, 1998) has been commonly used as the principle to optimize the parameters on a training set. Given an input x, the neural network makes a prediction c(x) = argmaxj∈L fj(x). The prediction is correct if c(x) is the same as the actual target c∗(x).\nUnfortunately, ERM trained neural networks are vulnerable to adversarial examples, inputs formed by applying small but intentionally crafted perturbations (Szegedy et al., 2014; Madry et al., 2018). That is, an adversarial example x′ is close to a clean example x under a distance metric, e.g., ℓ∞ distance, but the neural network outputs an incorrect result for the adversarial example x′ with high\nconfidence. In most cases, the difference between the adversarial example and clean example is not readily recognizable to humans."
},
{
"heading": "2.2 ATTACK METHODS",
"text": "Existing attack methods can be categorized into whitebox attacks and blackbox attacks. We focus on defending against whitebox attacks, wherein the adversary has full access to the network model including the architecture and weights. The fast gradient sign (FGSM) method (Goodfellow et al., 2015) and PGD are two successful optimizationbased attack methods.\nThe FGSM method is a onestep attack method. It generates adversarial perturbations that yield the highest loss increase in the gradient sign direction. Let x be the input to a network model, y the label associate with x and L(θ,x, y) be the loss function for training the neural network. The FGSM method generates a maxnorm constrained perturbation as follows:\nη = εsign(∇xL(θ,x, y)), (1)\nwhere ε denotes the maxnorm. This method was developed based on the view that the primary cause of neural networks’ adversarial vulnerability is their linear nature. The required gradient can be computed efficiently using backpropagation.\nThe PGD method is a multistep attack method that iteratively applies projected gradient descent on the negative loss function (Kurakin et al., 2016) as follows:\nxt+1 = Πx+S(x t + αsign(∇xtL(θ,xt, y))), (2)\nwhere α denotes the step size and Π denotes the projection operator that projects the perturbed input onto x+ S. We consider projecting the perturbed input onto a predefined ℓ∞ ball from the original input. The PGD attack method can be seen as a multistep FGSM method. It is a much strong adversary that reliably causes a variety of neural networks to misclassify their input.\n3 METHODOLOGY\nWhile many studies have been conducted on defending against adversarial attacks at inference time, these studies have not considered using local gradient information to resist adversarial perturbations. Previous work has suggested that the primary cause of neural networks’ adversarial vulnerability is their linear nature (Goodfellow et al., 2015). 
It would be more effective to use first-order gradient information to counteract adversarial perturbations such that the resulting perturbations no longer cause the model to make an incorrect prediction.\nAdversarial perturbations are small crafted perturbations that slightly affect the visual quality of inputs but cause the neural network to misclassify the inputs in favor of an incorrect answer with high probability. We show that this effect can be counteracted by applying small perturbations generated using local first-order gradient information for class labels other than the predicted one. An illustration of this phenomenon is shown in Figure 1. We see that by adding perturbations generated for non-predicted labels to the input, the prediction probability for the correct category increases and that for the incorrect label is suppressed.\nAlgorithm 1 Counteracting adversarial perturbations using local first-order gradients.\nInput: Neural network f; input x; step size α used in PGD to generate perturbations to counteract the adversarial perturbation.\nOutput: Prediction result for x.\n1: Randomly select N class labels {l_1, l_2, ..., l_N};\n2: for i = 1 to N do\n3:   η_i = PGD(l_i, α, step=1) // generate perturbation η_i for l_i using the one-step PGD method.\n4: end for\n5: x = x + Π_C(Σ_{i=1}^N η_i(x)) // C is an ℓ∞-bounded space.\n6: return f(x).\nBased on this phenomenon, we propose a method of counteracting adversarial perturbations to improve adversarial robustness. In the proposed method, we generate small perturbations for a number of randomly selected class labels and apply these perturbations to the input to resist the adversarial perturbation. Let x be the input to a model, which can be an adversarial or clean example. We randomly select N class labels and generate small first-order perturbations for the N selected labels. These N small perturbations are added together and then projected onto an ℓ∞-bounded space before being applied to the input. 
This procedure can be formulated as follows:\nx̃ = x + Π_C(Σ_{i=1}^N η_i(x)), (3)\nwhere η_i(x) denotes the small perturbation generated for the i-th selected class label, and C = {t : ∥t − x∥∞ ≤ µ} is a µ-bounded ℓ∞ space. The one-step PGD method is used to generate small perturbations. This is the same as using the FGSM method and empirically achieves better performance than using multiple steps. The perturbations can be generated in an untargeted or targeted manner. The combined perturbation is projected onto the space C. This ensures that the obtained example is visually similar to the original one. We detail the procedure for counteracting adversarial perturbations in Algorithm 1.\nDiscussion and Analysis. Adversarial examples expose underlying flaws in the training algorithms. While much progress has been made in defending against adversarial attacks, it is difficult to theoretically understand neural networks’ vulnerability to adversarial examples. Previous work (Athalye et al., 2018) has suggested that the adversarial perturbation δ can be obtained by solving the following optimization problem:\nmin ∥δ∥_p, s.t. c(x + δ) ≠ c∗(x), ∥δ∥_p ≤ ξ, (4)\nwhere ξ is a hyperparameter constraining the size of the perturbation. This problem can be effectively solved by gradient descent-based attack methods such as PGD and FGSM that reliably cause neural networks to output an incorrect result. These attack methods typically use local first-order gradients to find the optimal solution. Because state-of-the-art neural networks usually have many parameters, perturbations obtained with these attack methods may overfit to the inputs. Therefore, perturbing and transferring these adversarial perturbations could be an effective way to resist the adversarial effect. Unlike previous random transformation-based methods, we employ local first-order gradient information to counteract the adversarial effect. 
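The counteracting procedure of Eq. (3) and Algorithm 1 can be sketched in PyTorch as follows. This is our reading of the description, not the authors' released code; the sign of the targeted one-step update and the helper names (counteract, n_classes) are assumptions:

```python
import torch

def counteract(model, loss_fn, x, num_labels, alpha, mu, n_classes):
    # Randomly select N class labels, generate a one-step targeted
    # perturbation for each, sum them, clamp the sum onto the l_inf
    # ball of radius mu (the space C), and add the result to the
    # (possibly adversarial) input before classifying it (Eq. 3).
    labels = torch.randperm(n_classes)[:num_labels]
    total = torch.zeros_like(x)
    for l in labels:
        xi = x.clone().detach().requires_grad_(True)
        y = torch.full((x.shape[0],), int(l), dtype=torch.long)
        loss = loss_fn(model(xi), y)
        grad = torch.autograd.grad(loss, xi)[0]
        # Targeted one-step step: descend the loss toward label l,
        # hence the minus sign (an assumption mirroring targeted PGD).
        total = total - alpha * grad.sign()
    x_tilde = x + torch.clamp(total, -mu, mu)  # projection Pi_C
    return model(x_tilde)
```

With the paper's settings this would be called with num_labels=9, alpha=4/255 and mu=8/255; the clamp ensures the counteracting perturbation stays small, as the text requires.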
We show that the proposed method is effective in improving defense performance, especially against strong adversarial examples generated using more iterations.\nLet x_0 be a clean example and δ be the adversarial perturbation. In our method, the following input is fed to the neural network:\nx_0 + δ·1_z(x_0) + Π_C(Σ_{i=1}^N η_i(x_0)), (5)\nwhere 1_z(x_0) = 0 if x_0 is not subject to adversarial attack and 1_z(x_0) = 1 if x_0 is subject to adversarial attack.\nThe perturbation η_i generated to counteract the adversarial perturbation should be small, otherwise it would be a new adversarial perturbation, which would essentially have no effect in counteracting the adversarial perturbation. Adversarial training, which has been shown to be effective in improving adversarial robustness, usually employs a first-order adversary like PGD to provide adversarial examples for training. These adversarial examples help to regularize the model to be resistant to adversarial perturbations. We show through experiments that our method is complementary to adversarial training in improving overall defense performance against both untargeted and targeted attacks.\nThe proposed method is applied at inference time. It can be directly applied to off-the-shelf models without retraining or fine-tuning them. The required gradient for generating small perturbations can be computed efficiently in parallel using backpropagation. This does not add much time to inference."
},
{
"heading": "4 EXPERIMENTS",
"text": ""
},
{
"heading": "4.1 EXPERIMENTAL SETUP",
"text": "We conduct experiments on CIFAR10 and CIFAR100 (Krizhevsky et al., 2009). ResNet50 (He et al., 2016) is used as the network model. We validate the proposed method on models trained using two methods: Mixup (Zhang et al., 2018) and IAT (Lamb et al., 2019). For fair performance comparison, we follow the same experimental setup as Pang et al. (2020) to train the models. The training procedure is performed for 200 epochs with a batch size of 64. The learning rate is initialized to 0.1 and divided by a factor of 10 at epoch 100 and 150. The values used for interpolation are sampled from Beta(1, 1) for both Mixup and IAT. The ratio between clean examples and adversarial examples used in IAT is set to 1:1. The untargeted PGD10 method with a step size of 2/255 and ε set to 8/255 is used to generate adversarial examples in IAT.\nWe experiment against both untargeted and targeted PGD attacks with different iterations. The values of ε and step size for the PGD attacks are set to 8/255 and 2/255, respectively. The onestep PDG method is used to generate perturbations to resist adversarial perturbations. Unless stated otherwise, perturbations used for defense purposes are generated in a targeted fashion. The step size for the onestep PGD and number of randomly selected class labels are set to 4/255 and 9, respectively. The value of µ is set to 8/255. For each experiment, we run our model for three times and report the mean accuracy. Our method is implemented in Pytorch (Paszke et al., 2017) and all experiments are conducted on one GPU.\nBaselines Three methods that were recently developed for inference time defense are used as baselines. These three methods are Xie et al.’s (2018), Guo et al.’s (2018) and MIOL (mixup inference\nwith nonpredicted labels) (Pang et al., 2020). We compare the performance our method and the baselines and present results of the joint use of our method and the baselines to resist adversarial examples."
},
{
"heading": "4.2 EXPERIMENTAL RESULTS",
"text": "We validate the proposed method against obliviousbox attacks (Carlini & Wagner, 2017). That is the adversary does not know about the existence of the defense mechanism, and adversarial examples are generated only based on targeted network models. We evaluate the performance of defenses on the entire test set. Table 1 and Table 2 report the quantitative results on CIFAR10 and CIFAR100, respectively, demonstrating the effectiveness of the proposed method in improving defense performance. We see from Table 1 that the proposed method significantly helps to improve defense performance of the baseline methods against untageted attacks, achieving at least 12.5% and 4.1% performance gains for Mixup and IAT trained models, respectively. For defending against targeted attacks, the proposed method performs well in combination with Xie et al.’s and Guo et al.’s for Mixup trained models, and it performs well together with Xie et al.’s for IAT trained models. It can be seen from Table 2 that, as with on CIFAR10, the proposed method also helps improve defense performance against untargeted attacks on CIFAR100, achieving at least 6.4% and 1.6% performance improvements for Mixup and IAT trained models, respectively. For defending against targeted attacks, our method consistently helps to improve defense performance when applied on Xie et al.’s and Guo et al.’s methods. We can also make the following three observations from the quantitative results.\n1. In most cases, the proposed method improves defense performance of the baseline methods. Especially for resisting untargeted attacks in different settings, our method significantly helps to improve defense performance. This shows that our method is complementary to the baselines to resist adversarial perturbations. Among the three baseline methods, the joint use of our method with Xie et al.’s and Guo et al.’s methods performs well compared to with the MIOL method. 
This could be because the perturbation used to counteract adversarial perturbations is reduced by the interpolation operation in MI-OL.\n2. The proposed method performs well against strong PGD attacks with more iterations. Previous studies show that adversarial perturbations generated using more iterations are difficult to resist. The results of the baselines also show that PGD attacks with more iterations result in reduced performance. It is worth noting that the proposed method achieves improved performance when defending against most strong PGD attacks. And for the remaining attacks, using more iterations results in performance comparable to using fewer iterations. The results show that adversarial perturbations generated using more iterations can be easily counteracted by using first-order perturbations.\n3. For defending against targeted PGD50 and PGD200 attacks on CIFAR10, our method together with Guo et al.’s on Mixup-trained models achieves higher performance than that obtained on IAT-trained models, improving the classification accuracy by 1.4% and 3.2%, respectively. Overall, our method together with Guo et al.’s achieves better or comparable performance than pure IAT-trained models. As far as we know, we are the first to outperform purely adversarially trained models using only inference-time defense methods. This suggests that adversarial training could be unnecessary if proper perturbations are applied to adversarial examples.\nNext, we analyse the impact of the step size used in the one-step PGD method on defense performance. We experiment on CIFAR10 and CIFAR100, resisting both untargeted and targeted PGD10 attacks. The experimental results are reported in Figure 2. We see that the step size affects untargeted and targeted attacks differently. The performance improves as the step size increases from 1 to 8 for untargeted attacks on the two datasets. 
For targeted attacks, the performance improves as the step size increases from 1 to 4 but then starts to decrease or plateau as the step size further increases.\nWe also analyse the impact of the number of selected class labels in our method on defense performance. Figure 3 shows the results of resisting untargeted and targeted PGD10 attacks on CIFAR10 and CIFAR100. We see that the performance improves for both untargeted and targeted attacks as the number increases from 1 to 9 on CIFAR10. On CIFAR100, the performance also improves as the number increases from 1 to 9 but begins to drop or remain similar as the number further increases.\nDiscussion on the type of defense perturbations. In our experiments, small perturbations used to counteract the adversarial perturbation are generated in a targeted manner, except for targeted attacks on IAT-trained models on CIFAR100, where small perturbations are generated in an untargeted manner. Overall, untargeted adversarial perturbations can be effectively counteracted by our method using perturbations generated in a targeted manner. The results also suggest that adversarial training behaves unstably across different data distributions.\nDiscussion on the number of steps used to generate defense perturbations. The perturbations for defense purposes are generated using the one-step PGD method. We also experimented with using multiple steps to generate perturbations for defense purposes. However, we find that this results in reduced performance in defending against adversarial examples. This could be because perturbations generated using multiple steps have adversarial effects themselves and do not help much to counteract the original adversarial perturbation.\nTo demonstrate the advantage of our method, we further compare the performance of different methods used together with Guo et al.’s. The results of defending against attacks on Mixup-trained models are reported in Table 3. 
We see that although these methods, including Xie et al.’s, MIOL, as well as random rotation and Gaussian noise, are effective in improving performance, out methods outperforms these methods by a large margin, especially when resisting adversarial examples generated using more iterations.\nFinally, we evaluate our method on clean examples. Table 4 compares the performance of our method and the baseline methods. We see that our method performs differently using different types of perturbations that are generated for defense purposes. Our method mostly performs very well on clean inputs compared to the baselines when the perturbations used for defense purposes are generated in an untargeted manner."
},
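The counteracting procedure discussed above can be illustrated numerically. The following is a minimal sketch, not the paper's implementation: it substitutes a linear softmax classifier for the neural network so the one-step targeted gradient has a closed form, and the names `one_step_targeted_delta` and `counteract` are our own.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def one_step_targeted_delta(W, b, x, target, step_size):
    # One-step PGD (FGSM-style) move of x toward class `target`
    # for a linear softmax classifier p = softmax(W @ x + b).
    p = softmax(W @ x + b)
    onehot = np.zeros_like(p)
    onehot[target] = 1.0
    grad_x = W.T @ (p - onehot)          # gradient of targeted CE loss w.r.t. x
    return -step_size * np.sign(grad_x)  # descend the targeted loss

def counteract(W, b, x_adv, selected_labels, step_size):
    # Average targeted defense perturbations over the randomly selected
    # class labels and add them to the (possibly adversarial) input.
    deltas = [one_step_targeted_delta(W, b, x_adv, c, step_size)
              for c in selected_labels]
    return x_adv + np.mean(deltas, axis=0)
```

For a small step size, each targeted delta increases the probability the classifier assigns to its target class, which is the sense in which the defense perturbation counteracts the adversarial one.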
{
"heading": "5 CONCLUSION",
"text": "We proposed a method of counteracting adversarial perturbations for defending against adversarial attacks. In our method, we generate small perturbations for a number of randomly selected class labels and apply these small perturbations to the input to counteract the adversarial perturbation. Unlike previous methods, our method employs local first-order gradients for defense purposes and can effectively improve adversarial robustness. Our method is applied at inference time and is complementary to adversarial training for improving overall defense performance. We experimentally validated our method on CIFAR-10 and CIFAR-100 against both untargeted and targeted PGD attacks, presenting extensive results that demonstrate our method significantly improves the defense performance of the baseline methods. We showed that our method performs well in resisting strong adversarial perturbations generated using more iterations, demonstrating the advantage of using local first-order gradients to resist adversarial perturbations. Notably, our method together with Guo et al.'s (2018) achieved better performance than that obtained on IAT-trained models when resisting targeted PGD50 and PGD200 attacks. This suggests that adversarial training could become unnecessary if proper perturbations are applied to inputs."
},
{
"heading": "A APPENDIX",
"text": "A.1 TRAINING USING MIXUP\nIn the Mixup method (Zhang et al., 2018), neural networks are trained by minimizing the following loss:\nL(f) = (1/m) ∑_{i=1}^{m} ℓ(f(x̃_i), ỹ_i), (6)\nwhere ℓ is a loss function that penalizes the difference between the prediction and its actual target, and\nx̃_i = λx_i + (1 − λ)x_j, ỹ_i = λy_i + (1 − λ)y_j. (7)\n(x_i, y_i) and (x_j, y_j) are randomly sampled from the training data, λ ∼ Beta(α, α), α ∈ (0, +∞). Training with Mixup empirically improves generalization on clean samples and slightly improves robustness against adversarial examples.\nA.2 ADVERSARIAL TRAINING\nAdversarial training was introduced by Szegedy et al. (2014). In the adversarial training method, a mixture of adversarial and clean examples is used to train a neural network. Madry et al. (2018) formulated adversarially robust training of neural networks as the saddle-point problem:\nmin_θ ρ(θ), where ρ(θ) = E_{(x,y)∼D}[max_{δ∈S} L(θ, x + δ, y)], (8)\nwhere θ denotes the parameters of the neural network and S is the allowed set of perturbations. The inner maximization problem aims to find an adversarial version of a given data point x that achieves a high loss, while the outer minimization aims to find model parameters such that the adversarial loss given by the inner attack problem is minimized. PGD, as a first-order adversary, can reliably solve the inner maximization problem, even though the inner maximization is non-concave.\nLamb et al. (2019) proposed the interpolated adversarial training (IAT) method, which combines Mixup with adversarial training. In the IAT method, interpolations of adversarial examples and of clean examples are used to train neural networks. Compared to adversarial training, IAT achieves high accuracy on clean examples while maintaining adversarial robustness.\nA.3 MORE TECHNICAL DETAILS\nThe hyperparameter settings used in our method on CIFAR-10 and CIFAR-100 are given in Table 5 and Table 6, respectively."
}
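Equations (6) and (7) of the appendix amount to mixing random pairs of training examples and their one-hot labels with a shared λ ∼ Beta(α, α). A minimal NumPy sketch (the function name `mixup_batch` is ours; the original works inside a full training pipeline):

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha, rng):
    # Draw a single lambda ~ Beta(alpha, alpha) and mix each example
    # with a randomly chosen partner, for both inputs and one-hot labels:
    #   x_tilde = lam * x_i + (1 - lam) * x_j
    #   y_tilde = lam * y_i + (1 - lam) * y_j
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix
```

Because the mixed labels are convex combinations of one-hot vectors, each row of `y_mix` still sums to one and can be fed to a standard cross-entropy loss.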
]
 2020
 
SP:8badc3f75194e9780063af5a2f26448e41e733d4
 [
"The technique is described in sufficient detail and the paper is easy to read. Experimental results involve three datasets: MNIST, Street View House Numbers, and German traffic signs. The experimental results show that the proposed technique finds significant failures in all datasets, including critical failure scenarios. After correction, the model's performance improves. "
]
 With the greater proliferation of machine learning models, the imperative of diagnosing and correcting bugs in models has become increasingly clear. As a route to better discover and fix model bugs, we propose failure scenarios: regions on the data manifold that are incorrectly classified by a model. We propose an end-to-end debugging framework called Defuse that uses these regions to fix faulty classifier predictions. The Defuse framework works in three steps. First, Defuse identifies many unrestricted adversarial examples (naturally occurring instances that are misclassified) using a generative model. Next, the procedure distills the misclassified data into failure scenarios using clustering. Last, the method corrects model behavior on the distilled scenarios through an optimization-based approach. We illustrate the utility of our framework on a variety of image datasets. We find that Defuse identifies and resolves concerning predictions while maintaining model generalization.
 []
 [
{
"authors": [
"Antreas Antoniou",
"Amos Storkey",
"Harrison Edwards"
],
"title": "Data augmentation generative adversarial networks",
"venue": "International Conference on Artificial Neural Networks and Machine Learning,",
"year": 2017
},
{
"authors": [
"Christopher M. Bishop"
],
"title": "Pattern Recognition and Machine Learning (Information Science and Statistics)",
"venue": null,
"year": 2006
},
{
"authors": [
"Serena Booth",
"Yilun Zhou",
"Ankit Shah",
"Julie Shah"
],
"title": "Bayestrex: Model transparency by example",
"venue": null,
"year": 2020
},
{
"authors": [
"L. Engstrom",
"Andrew Ilyas",
"Shibani Santurkar",
"D. Tsipras",
"J. Steinhardt",
"A. Madry"
],
"title": "Identifying statistical bias in dataset replication",
"venue": null,
"year": 2020
},
{
"authors": [
"Michael Feldman",
"Sorelle A. Friedler",
"John Moeller",
"Carlos Scheidegger",
"Suresh Venkatasubramanian"
],
"title": "Certifying and removing disparate impact",
"venue": "In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,",
"year": 2015
},
{
"authors": [
"Kaiming He",
"X. Zhang",
"Shaoqing Ren",
"Jian Sun"
],
"title": "Deep residual learning for image recognition",
"venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),",
"year": 2016
},
{
"authors": [
"Irina Higgins",
"Loı̈c Matthey",
"Arka Pal",
"Christopher Burgess",
"Xavier Glorot",
"Matthew M Botvinick",
"Shakir Mohamed",
"Alexander Lerchner"
],
"title": "β-VAE: Learning basic visual concepts with a constrained variational framework",
"venue": "In ICLR,",
"year": 2017
},
{
"authors": [
"Geoffrey Hinton",
"Oriol Vinyals",
"Jeffrey Dean"
],
"title": "Distilling the knowledge in a neural network",
"venue": "In NeurIPS Deep Learning and Representation Learning Workshop,",
"year": 2014
},
{
"authors": [
"Daniel Kang",
"D. Raghavan",
"Peter Bailis",
"M. Zaharia"
],
"title": "Model assertions for debugging machine learning",
"venue": "Debugging Machine Learning Models,",
"year": 2018
},
{
"authors": [
"Durk P Kingma",
"Shakir Mohamed",
"Danilo Jimenez Rezende",
"Max Welling"
],
"title": "Semisupervised learning with deep generative models",
"venue": "Advances in Neural Information Processing Systems",
"year": 2014
},
{
"authors": [
"Abhishek Kumar",
"Prasanna Sattigeri",
"Tom Fletcher"
],
"title": "Semisupervised learning with gans: Manifold invariance with improved inference",
"venue": "Advances in Neural Information Processing Systems",
"year": 2017
},
{
"authors": [
"Karolina La Fors",
"Bart Custers",
"Esther Keymolen"
],
"title": "Reassessing values for emerging big data technologies: integrating designbased and applicationbased approaches",
"venue": "Ethics and Information Technology,",
"year": 2019
},
{
"authors": [
"Himabindu Lakkaraju",
"Ece Kamar",
"Rich Caruana",
"Jure Leskovec"
],
"title": "Faithful and customizable explanations of black box models",
"venue": "In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society,",
"year": 2019
},
{
"authors": [
"Yann LeCun",
"Corinna Cortes",
"CJ Burges"
],
"title": "Mnist handwritten digit database",
"venue": "ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist,",
"year": 2010
},
{
"authors": [
"Edo Liberty",
"Zohar Karnin",
"Bing Xiang",
"Laurence Rouesnel",
"Baris Coskun",
"Ramesh Nallapati",
"Julio Delgado",
"Amir Sadoughi",
"Yury Astashonok",
"Piali Das",
"Can Balioglu",
"Saswata Chakravarty",
"Madhav Jha",
"Philip Gautier",
"David Arpin",
"Tim Januschowski",
"Valentin Flunkert",
"Yuyang Wang",
"Jan Gasthaus",
"Lorenzo Stella",
"Syama Rangapuram",
"David Salinas",
"Sebastian Schelter",
"Alex Smola"
],
"title": "Elastic machine learning algorithms in amazon sagemaker",
"venue": null,
"year": 2020
},
{
"authors": [
"Scott M Lundberg",
"SuIn Lee"
],
"title": "A unified approach to interpreting model predictions",
"venue": "Advances in Neural Information Processing Systems",
"year": 2017
},
{
"authors": [
"Stefan Milz",
"Tobias Rudiger",
"Sebastian Suss"
],
"title": "Aerial ganeration: Towards realistic data augmentation using conditional gans",
"venue": "In Proceedings of the European Conference on Computer Vision (ECCV) Workshops,",
"year": 2018
},
{
"authors": [
"Yuval Netzer",
"Tao Wang",
"Adam Coates",
"Alessandro Bissacco",
"Bo Wu",
"Andrew Y. Ng"
],
"title": "Reading digits in natural images with unsupervised feature learning",
"venue": "NIPS Workshop on Deep Learning and Unsupervised Feature Learning,",
"year": 2011
},
{
"authors": [
"Augustus Odena",
"Catherine Olsson",
"David Andersen",
"Ian Goodfellow"
],
"title": "TensorFuzz: Debugging neural networks with coverageguided fuzzing",
"venue": "Proceedings of Machine Learning Research,",
"year": 2019
},
{
"authors": [
"Adam Paszke",
"Sam Gross",
"Soumith Chintala",
"Gregory Chanan",
"Edward Yang",
"Zachary DeVito",
"Zeming Lin",
"Alban Desmaison",
"Luca Antiga",
"Adam Lerer"
],
"title": "Mnist example pytorch",
"venue": null,
"year": 2019
},
{
"authors": [
"F. Pedregosa",
"G. Varoquaux",
"A. Gramfort",
"V. Michel",
"B. Thirion",
"O. Grisel",
"M. Blondel",
"P. Prettenhofer",
"R. Weiss",
"V. Dubourg",
"J. Vanderplas",
"A. Passos",
"D. Cournapeau",
"M. Brucher",
"M. Perrot",
"E. Duchesnay"
],
"title": "Scikitlearn: Machine learning in Python",
"venue": "Journal of Machine Learning Research,",
"year": 2011
},
{
"authors": [
"Pranav Rajpurkar",
"Jian Zhang",
"Konstantin Lopyrev",
"Percy Liang"
],
"title": "SQuAD: 100,000+ questions for machine comprehension of text",
"venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,",
"year": 2016
},
{
"authors": [
"Benjamin Recht",
"Rebecca Roelofs",
"Ludwig Schmidt",
"Vaishaal Shankar"
],
"title": "Do ImageNet classifiers generalize to ImageNet?",
"venue": "Proceedings of Machine Learning Research,",
"year": 2019
},
{
"authors": [
"Marco Tulio Ribeiro",
"Sameer Singh",
"Carlos Guestrin"
],
"title": "“Why should I trust you?” Explaining the predictions of any classifier",
"venue": "In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining,",
"year": 2016
},
{
"authors": [
"Marco Tulio Ribeiro",
"Tongshuang Wu",
"Carlos Guestrin",
"Sameer Singh"
],
"title": "Beyond accuracy: Behavioral testing of NLP models with CheckList",
"venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 4902–4912,",
"year": 2020
},
{
"authors": [
"Veit Sandfort",
"Ke Yan",
"Perry J. Pickhardt",
"Ronald M. Summers"
],
"title": "Data augmentation using generative adversarial networks (cyclegan) to improve generalizability in ct segmentation tasks",
"venue": "Scientific Reports,",
"year": 2019
},
{
"authors": [
"Julien Simon"
],
"title": "Amazon sagemaker model monitor – fully managed automatic monitoring for your machine learning models",
"venue": "AWS News Blog,",
"year": 2019
},
{
"authors": [
"Dylan Slack",
"Sorelle A. Friedler",
"Emile Givental"
],
"title": "Fairness warnings and fairmaml: Learning fairly with minimal data",
"venue": "Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT*),",
"year": 2020
},
{
"authors": [
"Dylan Slack",
"Sophie Hilgard",
"Sameer Singh",
"Himabindu Lakkaraju"
],
"title": "How much should I trust you? Modeling uncertainty of black box explanations",
"venue": "AIES,",
"year": 2020
},
{
"authors": [
"Yang Song",
"Rui Shu",
"Nate Kushman",
"Stefano Ermon"
],
"title": "Constructing unrestricted adversarial examples with generative models",
"venue": "Advances in Neural Information Processing Systems",
"year": 2018
},
{
"authors": [
"Johannes Stallkamp",
"Marc Schlipsing",
"Jan Salmen",
"Christian Igel"
],
"title": "The German Traffic Sign Recognition Benchmark: A multiclass classification competition",
"venue": "In IEEE International Joint Conference on Neural Networks,",
"year": 2011
},
{
"authors": [
"Erik B. Sudderth"
],
"title": "Graphical models for visual object recognition and tracking",
"venue": null,
"year": 2006
},
{
"authors": [
"P. Varma",
"Bryan He",
"Dan Iter",
"Peng Xu",
"R. Yu",
"C.D. Sa",
"Christopher Ré"
],
"title": "Socratic learning: Augmenting generative models to incorporate latent subsets in training data",
"venue": "arXiv: Learning,",
"year": 2016
},
{
"authors": [
"Paroma Varma",
"Dan Iter",
"Christopher De Sa",
"Christopher Ré"
],
"title": "Flipper: A systematic approach to debugging training sets",
"venue": "In Proceedings of the 2nd Workshop on Human-In-the-Loop Data Analytics (HILDA'17). Association for Computing Machinery. ISBN 9781450350297,",
"year": 2017
},
{
"authors": [
"Tongshuang Wu",
"Marco Tulio Ribeiro",
"Jeffrey Heer",
"Daniel Weld"
],
"title": "Errudite: Scalable, reproducible, and testable error analysis",
"venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,",
"year": 2019
},
{
"authors": [
"Hongyang Zhang",
"Yaodong Yu",
"Jiantao Jiao",
"Eric Xing",
"Laurent El Ghaoui",
"Michael Jordan"
],
"title": "Theoretically principled trade-off between robustness and accuracy",
"venue": "Proceedings of Machine Learning Research,",
"year": 2019
},
{
"authors": [
"Xuezhou Zhang",
"Xiaojin Zhu",
"Stephen J. Wright"
],
"title": "Training set debugging using trusted items",
"venue": "In AAAI,",
"year": 2018
},
{
"authors": [
"Zhengli Zhao",
"Dheeru Dua",
"Sameer Singh"
],
"title": "Generating natural adversarial examples",
"venue": "In International Conference on Learning Representations (ICLR),",
"year": 2018
}
]
 [
{
"heading": "1 INTRODUCTION",
"text": "Debugging machine learning (ML) models is a critical part of the ML development life cycle. Uncovering bugs helps ML developers make important decisions about both development and deployment. In practice, much of debugging uses aggregate test statistics (like those in leader-board-style challenges [Rajpurkar et al. (2016)]) and continuous evaluation and monitoring post-deployment [Liberty et al. (2020), Simon (2019)]. However, over-reliance on test statistics raises additional issues. For instance, aggregate statistics like held-out test accuracy are known to overestimate generalization performance [Recht et al. (2019)]. Further, such statistics offer little insight into, nor remedy for, specific model failures [Ribeiro et al. (2020); Wu et al. (2019)]. Last, reactive debugging of failures as they occur in production does little to mitigate harmful user experiences [La Fors et al. (2019)]. Several techniques exist for identifying undesirable behavior in machine learning models, including explanations [Ribeiro et al. (2016); Slack et al. (2020b); Lakkaraju et al. (2019); Lundberg & Lee (2017)], fairness metrics [Feldman et al. (2015), Slack et al. (2020a)], data set replication [Recht et al. (2019); Engstrom et al. (2020)], and behavioral testing tools [Ribeiro et al. (2020)]. However, these techniques either do not provide methods to remedy model bugs or require a high level of human supervision. To enable model designers to discover and correct model bugs beyond aggregate test statistics, we analyze unrestricted adversarial examples: instances on the data manifold that are misclassified [Song et al. (2018)]. We identify model bugs by diagnosing common patterns in unrestricted adversarial examples.\nIn this work, we propose Defuse: a technique for debugging classifiers through distilling1 unrestricted adversarial examples. Defuse works in three steps. 
First, Defuse identifies unrestricted adversarial examples by making small, semantically meaningful changes to input data using a variational autoencoder (VAE). If the classifier prediction deviates from the ground-truth label on the altered instance, Defuse returns the data instance as a potential model failure. This method employs techniques similar to those of [Zhao et al. (2018)]: namely, small perturbations in the latent space of generative models can produce images that are misclassified. Second, Defuse distills the changes through clustering on the unrestricted adversarial examples' latent codes. In this way, Defuse diagnoses regions in the latent space that are problematic for the classifier. This step produces a set of\n1We mean distilling in the sense of “to extract the most important aspects of” and do not intend to invoke the knowledge distillation literature [Hinton et al. (2014)].\nclusters in the latent space where misclassified data is likely to be found. We call these localities failure scenarios. An annotator reviews the failure scenarios and assigns the correct label, one label per scenario. Third, Defuse corrects the model's behavior on the discovered failure scenarios through optimization. Because we use a generative clustering model to describe the failure scenarios, we can sample many unrestricted adversarial examples and fine-tune to fix the classifier. Critically, failure scenarios are highly useful for model debugging because they reveal high-level patterns in the way the model fails. By understanding these consistent trends in model failures, model designers can more effectively understand problematic deployment scenarios for their models.\nTo illustrate the usefulness of failure scenarios, we run Defuse on a classifier trained on MNIST and provide an overview in figure 1. In the identification step (first pane in figure 1), Defuse generates unrestricted adversarial examples for the model. 
The red number in the upper-right corner of each image is the classifier's prediction. Although the classifier achieves high test set performance, we find naturally occurring examples that are classified incorrectly. Next, the method performs the distillation step (second pane in figure 1): the clustering model groups together similar failures for annotator labeling. We see that similar mistakes are grouped together; for instance, Defuse groups together a similar style of incorrectly classified eights in the first row of the second pane in figure 1. Next, Defuse receives annotator labels for each of the clusters.2 Last, we run the correction step using both the annotator-labeled data and the original training data. We see that the model then correctly classifies the images (third pane in figure 1). Importantly, the model maintains its predictive performance, scoring 99.1% accuracy after tuning. Thus, Defuse enables model designers to both discover and correct naturally occurring model failures.\nWe provide the necessary background for Defuse (§2). Next, we detail the three steps in Defuse: identification, distillation, and correction (§3). We then demonstrate the usefulness of Defuse on three image datasets: MNIST [LeCun et al. (2010)], the German traffic signs dataset [Stallkamp et al. (2011)], and the Street View House Numbers dataset [Netzer et al. (2011)], and find that Defuse discovers and resolves critical bugs in high-performance classifiers trained on these datasets (§4)."
},
{
"heading": "2 NOTATION AND BACKGROUND",
"text": "In this section, we establish notation and background on unrestricted adversarial examples. Though unrestricted adversarial examples can be found in many domains, we focus on Defuse applied to image classification.\n2We assign label 8 to the first row in the second pane of figure 1, label 0 to the second row, and label 6 to the third row.\nUnrestricted adversarial examples Let f : ℝ^N → [0, 1]^C denote a classifier that accepts a data point x ∈ X, where X is the set of legitimate images. The classifier f returns the probability that x belongs to class c ∈ {1, ..., C}. Next, assume f is trained on a data set D consisting of d tuples (x, y) containing data point x and ground-truth label y, using loss function L. Finally, suppose there exists an oracle o : X → {1, ..., C} that outputs the true label for x. We define the set of unrestricted adversarial examples as A_N := {x ∈ X : o(x) ≠ f(x)} [Song et al. (2018)].\nVariational Autoencoders (VAEs) In order to discover unrestricted adversarial examples, it is necessary to model the set of legitimate images. We use a VAE to create such a model. A VAE is composed of encoder and decoder neural networks, which model the relationship between data x and latent factors z ∈ ℝ^K. Where x is generated by some ground-truth latent factors v ∈ ℝ^M, we wish to train a model such that the learned generative factors closely resemble the true factors: p(x|v) ≈ p(x|z). To train such a model, we employ the β-VAE [Higgins et al. (2017)]. This technique produces an encoder q_φ(z|x) that maps from data to latent codes and a decoder p_θ(x|z) that maps from codes to data."
},
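The set A_N above has a direct computational reading: filter the legitimate inputs for disagreement between oracle and classifier. A toy sketch, where `oracle` and `broken_f` are hypothetical stand-ins for o and f:

```python
def unrestricted_adversarial_examples(X, oracle, f):
    # A_N := {x in X : o(x) != f(x)} -- legitimate inputs on which the
    # classifier's prediction disagrees with the oracle's label.
    return [x for x in X if oracle(x) != f(x)]

# Toy illustration: the "oracle" labels integers by parity, while the
# "classifier" gets multiples of 4 wrong.
oracle = lambda x: x % 2
broken_f = lambda x: (x % 2) ^ (x % 4 == 0)
```

Of course, for images there is no programmatic oracle; the point of the VAE in Defuse is to search the data manifold for candidates whose ground-truth label is known or annotated.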
{
"heading": "3 METHODS",
"text": ""
},
{
"heading": "3.1 FAILURE SCENARIOS",
"text": "We begin by formalizing our notion of failure scenarios. Let z ∈ ℝ^K be the latent code corresponding to image x ∈ X and q_φ(·) : X → ℝ^K be the encoder mapping images to latent codes.\nDefinition 3.1. Failure scenario. Given a constant ε > 0, vector norm ‖·‖, and point z0, a failure scenario is a set of images A_R = {x ∈ X : ε > ‖q_φ(x) − z0‖ ∧ o(x) ≠ f(x)}.\nPrevious works that investigate unrestricted adversarial examples look for specific instances where the oracle and the model disagree [Song et al. (2018); Zhao et al. (2018)]. We instead look for regions in the latent space where this is the case. Because the latent space of the VAE tends to take on a Gaussian form due to the prior, we can use Euclidean distance to define these regions; if we were to define failure scenarios on the original data manifold, we might need a much more complex distance function. Because it is likely too strict to assume the oracle and model disagree on every instance in such a region, we also introduce a relaxation.\nDefinition 3.2. Relaxed failure scenario. Given a constant ε > 0, vector norm ‖·‖, point z0, and threshold ρ, a relaxed failure scenario is a set of images A_f = {x ∈ X : ε > ‖q_φ(x) − z0‖} such that |{x ∈ A_f : o(x) ≠ f(x)}| / |A_f| > ρ.\nIn this work, we adopt the latter definition of failure scenarios. To concretize failure scenarios and provide evidence for their existence, we continue our MNIST example from figure 1. We plot the t-SNE embeddings of the latent codes of 10,000 images from the training set and 516 unrestricted adversarial examples created during the identification step in figure 2 (details of how we generate unrestricted adversarial examples are in section 3.2.1). We see that the unrestricted adversarial examples come from similar regions in the latent space."
},
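Definition 3.2 can be checked directly given encoded latent codes and labels. A small NumPy sketch (function and variable names are ours, not from the paper):

```python
import numpy as np

def is_relaxed_failure_scenario(Z, y_oracle, y_pred, z0, eps, rho):
    # Definition 3.2: the epsilon-ball around z0 in latent space is a
    # relaxed failure scenario if the fraction of instances inside it on
    # which the model and oracle disagree exceeds the threshold rho.
    inside = np.linalg.norm(Z - z0, axis=1) < eps
    if not inside.any():
        return False
    disagree = y_oracle[inside] != y_pred[inside]
    return disagree.mean() > rho
```

Setting rho = 1 (strictly, requiring every instance inside the ball to be misclassified) recovers a ball-shaped version of Definition 3.1.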
{
"heading": "3.2 DEFUSE",
"text": "In this section, we introduce Defuse: our procedure for identifying and correcting classifier performance on failure scenarios. First, we explain how we identify unrestricted adversarial examples using VAEs. Next, we describe our clustering approach that distills these instances into failure scenarios. Last, we introduce our approach to correcting classifier predictions on the failure scenarios."
},
{
"heading": "3.2.1 IDENTIFYING UNRESTRICTED ADVERSARIAL EXAMPLES",
"text": "This section describes the identification step in Defuse (first pane in figure 1). The aim of the identification step is to generate many unrestricted adversarial examples. In essence, we encode all the images from the training data, perturb the latent codes with a small amount of noise drawn from a Beta distribution, and save instances that, when decoded, are classified differently from their ground-truth labels by f. Because the latent codes are perturbed with only a small amount of noise, we expect the decoded instances to have small but semantically meaningful differences from the original instances; thus, if the classifier prediction deviates on the perturbation, the instance is likely misclassified. We generate unrestricted adversarial examples from each instance x ∈ X, collecting the examples produced for every instance into a single set. Pseudo code of the algorithm for generating a single unrestricted adversarial example is given in algorithm 1 in appendix A.\nOur technique is related to the method for generating natural adversarial examples from [Zhao et al. (2018)], a very similar but slightly different concept from unrestricted adversarial examples. The authors use a similar stochastic search method in the latent space of a GAN: they start with a small amount of noise and increase its magnitude until they find an unrestricted adversarial example, saving only the unrestricted adversarial examples that are minimally distant from a data point. They also save images that differ in prediction from the original decoded instance. Because we iterate over the entire data set, it is simpler to keep the level of noise fixed and sample a predetermined number of times. In addition, we save images that differ in ground-truth label from the original decoded instance because we seek to debug a classifier: if the original instance is misclassified, we wish to save it as a model failure."
},
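The identification step can be sketched as follows. This is our reading of the procedure, not the paper's code: in particular, we zero-center the Beta noise by subtracting its mean a/(a+b), which the text does not state explicitly, and `decode` and `f` stand in for the VAE decoder and the classifier.

```python
import numpy as np

def identify(Z, labels, decode, f, a, b, n_samples, rng):
    # For every training latent code, draw small Beta(a, b) noise
    # (shifted by the Beta mean -- our assumption), decode the perturbed
    # code, and keep any decoding the classifier labels differently from
    # the ground-truth label.
    failures = []
    for z, y in zip(Z, labels):
        for _ in range(n_samples):
            noise = rng.beta(a, b, size=z.shape) - a / (a + b)
            z_tilde = z + noise
            if f(decode(z_tilde)) != y:
                failures.append((z_tilde, y))
    return failures
```

With a = b = 50 (the MNIST setting from section 4.1), the centered noise has standard deviation ≈ 0.05 per coordinate, so the decoded instance stays close to the original.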
{
"heading": "3.2.2 DISTILLING FAILURE SCENARIOS",
"text": "This section describes the distillation step in Defuse (second pane of figure 1). The goal of the distillation step is to cluster the latent codes of the set of unrestricted adversarial examples in order to diagnose failure scenarios. We require our clustering method to (1) infer the correct number of clusters from the data and (2) be capable of generating instances of each cluster. We need to infer the number of clusters from the data because the number of failure scenarios is unknown ahead of time. Further, we must be capable of generating many instances from each cluster so that we have enough data to fine-tune on in order to correct the faulty model behavior; generating many failure instances also enables model designers to see numerous examples from the failure scenarios, which encourages understanding of the model's failure modes. Though any clustering method that fits this description could be used for distillation, we use a Gaussian mixture model (GMM) with a Dirichlet process prior. We use the Dirichlet process because it nicely describes the clustering problem where the number of mixtures is unknown beforehand, fulfilling our first criterion [Sudderth (2006)]. Additionally, because the model is generative, we can sample new instances, which satisfies our second criterion.\nIn practice, we use the truncated stick-breaking construction of the Dirichlet process, where K is the upper bound on the number of mixtures. The truncated stick-breaking construction simplifies inference, making computation more efficient [Sudderth (2006)]. The method outputs a set of clusters θ_j = (μ_j, Σ_j, π_j), where j ∈ {1, ..., K}. The parameters μ_j and Σ_j describe the mean and covariance of a multivariate normal distribution, and π_j indicates the cluster weight. To perform inference on the model, we employ the expectation maximization (EM) procedure described in [Bishop (2006)] and use the implementation provided in [Pedregosa et al. (2011)]. Once we run EM and determine the parameter values, we discard cluster components that are not used by the model: we fix some small ε and define the set of failure scenarios Λ generated at the distillation step as Λ := {(μ_j, Σ_j, π_j) : π_j > ε}."
},
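Since the paper uses the scikit-learn implementation of the truncated stick-breaking Dirichlet-process GMM, the distillation step can be approximated with `BayesianGaussianMixture`. A sketch under that assumption; the K = 100 cap and ε = 0.01 threshold follow the settings reported in section 4.1, and the function name `distill` is ours:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def distill(latent_codes, K=100, eps=0.01, seed=0):
    # Fit a GMM with a (truncated stick-breaking) Dirichlet-process
    # prior using at most K components, then keep only the components
    # whose mixture weight pi_j exceeds eps:
    #   Lambda := {(mu_j, Sigma_j, pi_j) : pi_j > eps}
    gmm = BayesianGaussianMixture(
        n_components=K,
        weight_concentration_prior_type="dirichlet_process",
        random_state=seed,
    ).fit(latent_codes)
    return [(mu, cov, w)
            for mu, cov, w in zip(gmm.means_, gmm.covariances_, gmm.weights_)
            if w > eps]
```

The Dirichlet-process prior drives the weights of unneeded components toward zero, so thresholding on π_j > ε effectively lets the data choose the number of failure scenarios.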
{
"heading": "3.2.3 CORRECTING FAILURE SCENARIOS",
"text": "Labeling First, an annotator assigns the correct label to the failure scenarios. For each failure scenario identified in Λ, we sample Q latent codes z ∼ N(μ_j, τ·Σ_j). Here, τ ∈ ℝ is a hyperparameter that controls the diversity of samples from the failure scenario. Because multiple ground-truth classes could be present in a failure scenario, we set this parameter tight enough that the sampled instances are from the same class, which makes labeling easier. We reconstruct the latent codes using the decoder p_θ(x|z). Next, an annotator reviews the reconstructed instances from the scenario and decides whether the scenario constitutes a model failure. If so, the annotator assigns the correct label, a single label for all of the instances generated from the scenario. We repeat this process for each of the scenarios identified in Λ and produce a data set of failure instances D_f. Pseudo code for the procedure is given in algorithm 2 in appendix A.\nFine-tuning We fine-tune on the training data with an additional regularization term to fix the classifier performance on the failure scenarios. The regularization term is the cross-entropy loss between the identified failure scenarios and the annotator labels. Where CE is the cross-entropy loss applied to the failure instances D_f and λ is the hyperparameter for the regularization term, we optimize the following objective using gradient descent: F(D, D_f) = L(D) + λ·CE(D_f). This objective encourages the model to maintain its predictive performance on the original training data while also encouraging it to predict the failure instances correctly; λ controls the pressure applied to the model to classify the failure instances correctly."
},
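The correction objective F(D, D_f) = L(D) + λ·CE(D_f) is simply a weighted sum of two losses. A NumPy sketch in which L is itself taken to be cross-entropy (an assumption for concreteness; the paper only says it is the original training loss), with our own function names:

```python
import numpy as np

def cross_entropy(probs, y_onehot):
    # Mean cross-entropy between predicted class probabilities and
    # one-hot labels; the small constant guards against log(0).
    return -np.mean(np.sum(y_onehot * np.log(probs + 1e-12), axis=1))

def correction_objective(probs_train, y_train, probs_fail, y_fail, lam):
    # F(D, D_f) = L(D) + lam * CE(D_f): the usual loss on the original
    # training data plus a weighted cross-entropy term on the
    # annotator-labeled failure instances D_f.
    return (cross_entropy(probs_train, y_train)
            + lam * cross_entropy(probs_fail, y_fail))
```

Setting λ = 0 recovers plain training, while larger λ pushes the model harder to classify the sampled failure instances correctly, matching the role of λ described above.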
{
"heading": "4 EXPERIMENTS",
"text": ""
},
{
"heading": "4.1 SETUP",
"text": "Datasets We evaluate Defuse on three datasets: MNIST [LeCun et al. (2010)], the German Traffic Signs dataset [Stallkamp et al. (2011)], and the Street View House Numbers dataset (SVHN) [Netzer et al. (2011)]. MNIST consists of 60,000 32×32 handwritten digits for training and 10,000 digits for testing, labeled with the digits 0–9. The German traffic signs dataset includes 26,640 training and 12,630 testing images of size 128×128; we randomly split the testing data in half to produce a validation and a testing set. The images are labeled with 43 different classes indicating the type of traffic sign. The SVHN dataset consists of 73,257 training and 26,032 testing images of size 32×32, containing digits of house numbers from Google Street View with labels 0–9; we split the testing set in half to produce a validation and a testing set.\nModels On MNIST, we train a CNN scoring 98.3% test set accuracy, following the architecture from [Paszke et al. (2019)]. On German traffic signs and SVHN, we fine-tune a ResNet-18 model pretrained on ImageNet [He et al. (2016)]. The German signs and SVHN models score 98.7% and 93.2% test accuracy, respectively. We train a VAE on all available data from each dataset to model the set of legitimate images in Defuse, using an Amazon EC2 P3 instance with a single NVIDIA Tesla V100 GPU for training. We follow architectures similar to [Higgins et al. (2017)] and set the size of the latent dimension z to 10 for MNIST/SVHN and 15 for German signs. We provide our VAE architectures in appendix B.\nDefuse In the identification step, we fix the parameters a and b of the Beta distribution noise to a = b = 50.0 for MNIST and a = b = 75.0 for SVHN and German signs. We found these parameters to be good choices because they produce a very small amount of perturbation noise, making the decoded instance only slightly different from the original instance. 
During distillation, we set the upper bound on the number of components K to 100. We generally found the actual number of clusters to be much lower than this level, so this serves as an appropriate upper bound. We also fixed the weight threshold for clusters ε to 0.01 during distillation in order to remove clusters with very low weight. We additionally randomly downsample the number of unrestricted adversarial examples to 50,000 to make inference of the GMM more efficient. For correction, we sample finetuning and testing sets consisting of 256 images each from every failure scenario. This number of samples captures the breadth of possible images in the scenario, so it is appropriate for tuning and evaluation. We use the finetuning set as the set of failure instances Df. We use the test set as held-out data for evaluating classifier performance on the failure scenarios after correction. During sampling, we fix the sample diversity τ to 0.5 for MNIST and 0.01 for SVHN and German signs because the samples from each of the failure scenarios appear to be in the same class at these values. We finetune over a range of correction regularization weights in order to find the best balance between training and failure scenario data. We use 3 epochs for MNIST and 5 for both SVHN and German signs because training converged within these time frames. During finetuning, we select the model for each weight according to the highest training set accuracy for MNIST or validation set accuracy for SVHN and German traffic signs at the end of each finetuning epoch. We select the best model overall as the one with the highest training or validation performance across all weights.\nAnnotator Labeling Because Defuse requires human supervision, we use Amazon SageMaker Ground Truth both to determine whether clusters generated in the distillation step are failure scenarios and to generate their correct labels. In order to determine whether clusters are failure scenarios, we sample 10 instances from each cluster in the distillation step. 
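The distillation step described above (fit a Dirichlet-process GMM over the latent codes, cap the component count at K, and drop components below the weight threshold) could look roughly like the sketch below. It assumes scikit-learn's `BayesianGaussianMixture`, which the paper cites via [Pedregosa et al. (2011)]; the function name and return shape are illustrative, not the authors' API.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def distill(latents, k_max=100, eps=0.01, max_points=50_000, seed=0):
    """Sketch of Defuse's distillation step: fit a Dirichlet-process GMM
    over the latent codes of the unrestricted adversarial examples, then
    keep only components with non-negligible mixing weight (>= eps)."""
    rng = np.random.default_rng(seed)
    z = np.asarray(latents, dtype=float)
    if len(z) > max_points:                       # downsample for efficiency
        z = z[rng.choice(len(z), max_points, replace=False)]
    k = min(k_max, len(z))                        # n_components cannot exceed n_samples
    gmm = BayesianGaussianMixture(
        n_components=k,
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full",                   # "full component covariance" (appendix B.1)
        random_state=seed,
    ).fit(z)
    keep = gmm.weights_ >= eps                    # drop near-empty clusters
    return gmm.means_[keep], gmm.covariances_[keep], gmm.weights_[keep]
```

The Dirichlet-process prior lets the effective number of clusters stay well below K, matching the paper's observation that far fewer than 100 components survive.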
It is usually apparent within 10 instances that the classifier disagrees with many of the ground truth labels, so 10 instances suffice to label the cluster as a failure scenario. For example, in figure 3 it is generally clear within only a few examples that the classifier incorrectly predicts the data. As such, 10 instances is a reasonable choice. To reduce noise in the annotation process, we assign the same image to 5 different workers and take the majority annotated label as ground truth. The workers label the images using an interface that includes a single image and the possible labels for that task. We additionally instruct workers to select “None of the above” if the image does not belong to any class and discard these labels. For instance, the MNIST interface includes a single image and buttons for the digits 0-9 along with a “None of the above” button. We provide a screenshot of this interface in figure 14. If more than half (i.e., setting ρ = 0.5) of the worker-labeled instances disagree with the classifier predictions on the 10 instances, we call the cluster a failure scenario. We chose ρ = 0.5 because clusters are highly dense with incorrect predictions at this level, making them useful both for understanding model failures and worthwhile for correction. We take the majority prediction over the 10 ground truth labels as the label for the failure scenario. As an exception, annotating the German traffic signs data requires specific knowledge of traffic signs. The German traffic signs data ranges across 43 different types of traffic signs. It is not reasonable to assume annotators have enough familiarity with this data to label it accurately. For this dataset, we, the authors, reviewed the distilled clusters and determined which clusters constituted failure scenarios. We labeled a cluster a failure scenario if half the instances appeared to be misclassified."
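The annotator review rule above (majority vote per image over 5 workers, discard "None of the above", flag the cluster if more than a fraction ρ of majority labels disagree with the classifier) can be written down compactly. The function name and argument layout are illustrative assumptions, not the authors' code.

```python
from collections import Counter

def review_cluster(annotations, classifier_preds, rho=0.5):
    """Sketch of the cluster-review rule from Section 4.1.

    annotations: per sampled instance, the list of worker labels
                 (None encodes "None of the above" and is discarded).
    classifier_preds: the classifier's prediction per instance.
    Returns (is_failure_scenario, majority label for the scenario)."""
    majority, disagree = [], 0
    for labels, pred in zip(annotations, classifier_preds):
        votes = [l for l in labels if l is not None]   # drop "None of the above"
        if not votes:
            continue
        label = Counter(votes).most_common(1)[0][0]    # majority worker label
        majority.append(label)
        if label != pred:
            disagree += 1
    is_failure = len(majority) > 0 and disagree / len(majority) > rho
    # the scenario's corrected label is the majority over the instance labels
    scenario_label = Counter(majority).most_common(1)[0][0] if majority else None
    return is_failure, scenario_label
```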
},
{
"heading": "4.2 ILLUSTRATIVE FAILURE SCENARIO EXAMPLES",
"text": "We demonstrate the potential of Defuse for identifying critical model bugs. We review failure scenarios produced in the three datasets we consider. All together, Defuse produces 19 failure scenarios for MNIST, 6 for SVHN, and 8 for German signs. For each dataset, we provide samples from three failure scenarios in figure 3. The failure scenarios include numerous mislabeled examples. Each failure scenario is composed of mislabeled examples of a similar style. For example, in MNIST, the failure scenario in the upper left hand corner of figure 3 includes a similar style of 4’s that are generally predicted incorrectly. The same is true for the failure scenarios in the center and right column where a certain style of 2’s and 6’s are mistaken. The failure scenarios generally include images which seem difficult to classify. For instance, the misclassified 6’s are quite thin making them appear like 1’s in some cases. There are similar trends in SVHN and German Signs. In SVHN, particular types of 5’s and 8’s are misclassified. The same is true in German signs where styles of 50km/h and 30km/h signs are predicted incorrectly. Generally, these methods reveal important bugs in each of the classifiers. It is clear from the MNIST example for instance that very skinny 6’s are challenging for the classifier to predict correctly. Further, the German signs classifier has a difficult time with 50km/h signs and tends to frequently mistake them as 80km/h. We provide further samples from other failure scenarios in appendix D. These results clearly demonstrate Defuse reveals insightful model bugs which are useful for model designers to understand."
},
{
"heading": "4.3 CORRECTING FAILURE SCENARIOS",
"text": "We show that Defuse resolves the failure scenarios while maintaining model generalization on the test set. To perform this analysis, we assess accuracy on both the failure scenario test data and test set after correction. It is important for classifier accuracy to improve on the failure scenario data in order\nto correct the bugs discovered while running Defuse. At the same time, the classifier accuracy on the test set should stay at a similar level or improve indicating that model generalization according to the test set is still strong. We compare Defuse against finetuning only on the unrestricted adversarial examples labeled by annotators. We expect this baseline to be reasonable because related works which focus on robustness to classic adversarial attacks demonstrate that tuning directly on the adversarial examples is effective [Zhang et al. (2019)]. We finetune on the unrestricted adversarial examples sweeping over a range of different ’s in the same way as Defuse described in section 4.1. We use this baseline for MNIST and SVHN and not German signs because we, the authors, assigned the failure scenarios for this data set. Thus, we do not have ground truth labels for unrestricted adversarial examples.\nWe provide an overview of the models before finetuning, finetuning with the unrestricted adversarial examples, and using Defuse in figure 4. Defuse scores highly on the failure scenario data after correction compared to before finetuning. There is only marginal improvement finetuning on the unrestricted adversarial examples. These results indicate Defuse corrects the faulty model performance on the identified failure scenarios. Further, we see the clustering step in Defuse is critical to its success because of the technique’s superior performance compared to finetuning on the unrestricted adversarial examples. In addition, there are minor effects on test set performance during finetuning. 
The test set accuracy increases slightly for MNIST and decreases marginally for SVHN and German signs, both when tuning on the unrestricted adversarial examples and when using Defuse. Though the test set performance changes marginally, the increased performance on the failure scenarios demonstrates Defuse’s capacity to correct important model errors. Further, we plot the relationship between test set accuracy and failure scenario test accuracy in figure 5. We generally see there is an appropriate regularization weight for each model at which there is both high test set performance and high accuracy on the failure scenarios. All in all, these results indicate Defuse serves as an effective method for correcting specific cases of faulty classifier performance while maintaining model generalization."
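The correction sweep described in Sections 4.1 and 4.3 can be sketched as below. The symbol for the regularization weight is lost in this extraction, so `lam` and the additive form of the combined objective are assumptions; the sweep values come from appendix B.

```python
def combined_loss(loss_train, loss_failure, lam):
    """One plausible form of the correction objective: training-set loss
    plus a weighted failure-scenario loss. The weight symbol is lost in
    this extraction, so `lam` is an assumed name."""
    return loss_train + lam * loss_failure

# The sweep grid listed in appendix B: 1e-10 ... 1e-1, then coarser values.
LAMBDA_SWEEP = [10.0 ** -k for k in range(10, 0, -1)] + [1, 2, 5, 10, 20, 100, 1000]

def best_weight(val_accuracy):
    """Select the sweep value whose finetuned model scored highest on the
    training/validation set, as described in Section 4.1.
    val_accuracy maps weight -> accuracy."""
    return max(val_accuracy, key=val_accuracy.get)
```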
},
{
"heading": "4.4 ANNOTATOR AGREEMENT",
"text": "Because we rely on annotators to provide the ground truth labels for the unrestricted adversarial examples, we investigate the agreement between the annotators during labeling. It is important for the annotators to agree on the labels for the unrestricted adversarial examples so that we can have high confidence our evaluation is based on accurately labeled data. We evaluate the annotator agreement through assessing the percent of annotators that voted for the majority label prediction in an unrestricted adversarial example across all the annotated examples. This metric will be high when the annotators are in consensus and low when only a few annotators constitute the majority vote. We provide the annotator agreement on MNIST and SVHN in figure 6 broken down into failure scenario data, nonfailure scenario data, and their combination. Interestingly, the failure scenario data has slightly lower annotator agreement indicating these tend to be more ambiguous examples. Further, there is lower agreement on SVHN than MNIST, likely because this data is more complex. All in all, there is generally high annotator agreement across all the data."
},
{
"heading": "5 RELATED WORK",
"text": "A number of related approaches for improving classifier performance use data created from generative models — mostly generative adversarial networks (GANs) [Sandfort et al. (2019); Milz et al. (2018); Antoniou et al. (2017)]. These methods use GANs to generate instances from classes that are underrepresented in the training data to improve generalization performance. Additional methods use generative models for semisupervised learning [Kingma et al. (2014); Varma et al. (2016); Kumar et al. (2017); Dumoulin et al. (2016)]. Though these methods are similar in nature to the correction step of our work, a key difference is Defuse focuses on summarizing and presenting high level model failures. Also, [Varma et al. (2017)] provide a system to debug data generated from a GAN when the training set may be inaccurate. Though similar, we ultimately use a generative model to debug a classifier and do not focus on the generative model itself. Last, similar to [Song et al. (2018), Zhao et al. (2018)], [Booth et al. (2020)] provide a method to generate highly confident misclassified instances.\nRelated to debugging models, [Kang et al. (2018)] focus on model assertions that flag failures during production. Also, [Zhang et al. (2018)] investigate debugging the training set for incorrectly labeled instances. We focus on preemptively identifying model bugs and do not focus on incorrectly labeled test set instances. Additionally, [Ribeiro et al. (2020)] propose a set of behavioral testing tools that help model designers find bugs in NLP models. This technique requires a high level of supervision and thus might not be appropraite in some settings. Last, [Odena et al. (2019)] provide a technique to debug neural networks through perturbing data inputs with various types of noise. By leveraging unrestricted adversarial examples, we distill high level patterns in critical and naturally occurring model bugs. 
This technique requires minimal human supervision while presenting important types of model errors to designers."
},
{
"heading": "6 CONCLUSION",
"text": "In this paper, we present Defuse: a method that generates and aggregates unrestricted adversarial examples to debug classifiers. Though unrestricted adversarial examples have been proposed in previous works, we harness such examples for the purpose of debugging classifiers. We accomplish this task through identifying failure scenarios: regions in the latent space of a VAE with many unrestricted adversarial examples. On a variety of data sets, we find that samples from failure scenarios are useful in a number of ways. First, failure scenarios are informative for understanding the ways certain models fail. Second, the generative aspect of failure scenarios is very useful for correcting failure scenarios. In our experimental results, we show that these failure scenarios include critical model issues for classifiers with real world impacts — i.e. traffic sign classification — and verify our results using ground truth annotator labels. We demonstrate that Defuse successfully resolves these issues. Although Defuse identifies important errors in classifiers, the technique requires a minimal level of human supervision. Namely, the failure scenarios must be reviewed before correction. In the future, it will be crucial to investigate automatic ways of reviewing failure scenarios."
},
{
"heading": "A DEFUSE PSUEDO CODE",
"text": "In algorithm 2, Correct(·) and Label(·) are the steps where the annotator decides if the scenario warrants correction and the annotator label for the failure scenario.\nAlgorithm 1 Identification Step 1: procedure IDENTIFY(f, p, q, x, y, a, b) 2: := {} 3: µ, := q (x) 4: for i 2 {1, ..., Q} do 5: ✏ := [Beta(a, b)1, 6: ...,Beta(a, b)M ] 7: xdecoded := p✓(µ+ ✏) 8: if y 6= f(xdecoded) then 9: := [ xdecoded 10: end if 11: end for 12: Return 13: end procedure\nAlgorithm 2 Labeling Step 1: procedure LABEL SCENARIOS(Q,⇤, p, q, ⌧ ) 2: Df := {} 3: for (µ, ,⇡) 2 ⇤ do 4: Xd := {} 5: for i 2 {1, .., Q} do 6: Xd := Xd [ q (N (µ, ⌧ · )) 7: end for 8: if Correct(Xd) then 9: Df := Df [ {Xd,Label(Xd)} 10: end if 11: end for 12: Return S Df 13: end procedure"
},
{
"heading": "B TRAINING DETAILS",
"text": "B.1 GMM DETAILS\nIn all experiments, we use the implementation of Gaussian mixture model with dirichlet process prior from [Pedregosa et al. (2011)]. We run our experiments with the default parameters and full component covariance.\nB.2 MNIST DETAILS\nModel details We train a CNN on the MNIST data set using the architecture in figure 7. We used the Adadelta optimizer with the learning rate set to 1. We trained for 5 epochs with a batch size of 64.\nVAE training details We train a VAE on MNIST using the architectures in figure 8 and 9. We set to 4. We trained for 800 epochs using the Adam optimizer with a learning rate of 0.001, a minibatch size of 2048, and set to 0.4. We also applied a linear annealing schedule on the KLDivergence for 500 optimization steps. We set z to have 10 dimensions.\nIdentification We performed identification with Q set to 500. We set a and b both to 50. We ran identification over the entire training set. Last, we limited the max allowable size of to 100.\nDistillation We ran the distillation step setting K, the upper bound on the number of mixtures, to 100. We fixed ✏ to 0.01 and discarded clusters with mixing proportions less than this value. This left 44 possible scenarios. We set ⌧ to 0.5 during review. We used Amazon Sagemaker Ground Truth\nto determine failure scenarios and labels. The labeling procedure is described in section 4.1. This produced 19 failure scenarios.\nCorrection We sampled 256 images from each of the failure scenarios for both finetuning and testing. We finetuned with minibatch size of 256, the Adam optimizer, and learning rate set to 0.001. We swept over a range of correction regularization ’s consisting of [1e 10, 1e 9, 1e 8, 1e 7, 1e 6, 1e 5, 1e 4, 1e 3, 1e 2, 1e 1, 1, 2, 5, 10, 20, 100, 1000] and finetuned for 3 epochs on each.\nB.3 GERMAN SIGNS DATASET DETAILS\nDataset The data consists of 26640 training images and 12630 testing images consisting of 43 different types of traffic signs. 
We randomly split the testing data in half to produce 6,315 testing and validation images. Additionally, we resize the images to 128x128 pixels.\nClassifier f We finetuned the ResNet18 model for 20 epochs using Adam with the cross entropy loss, a learning rate of 0.001, and a batch size of 256 on the training dataset, and assessed the validation accuracy at the end of each epoch. We saved the model with the highest validation accuracy.\nVAE training details We trained for 800 epochs using the Adam optimizer with a learning rate of 0.001, a minibatch size of 2048, and β set to 4. We also applied a linear annealing schedule on the KL divergence for 500 optimization steps. We set z to have 15 dimensions.\nIdentification We performed identification with Q set to 100. We set a and b both to 75.\nDistillation We ran the distillation step setting K to 100. We fixed ε to 0.01 and discarded clusters with mixing proportions less than this value. This left 38 possible scenarios. We set τ to 0.01 during review. We determined 8 of these scenarios to be particularly concerning.\nCorrection We finetuned with a minibatch size of 256, the Adam optimizer, and learning rate set to 0.001. We swept over a range of correction regularization weights consisting of [1e-10, 1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1, 2, 5, 10, 20, 100, 1000] and finetuned for 5 epochs on each.\nB.4 SVHN DETAILS\nDataset The dataset consists of 73,257 training and 26,032 testing images. We also randomly split the testing data to create a validation dataset. Thus, the final validation and testing sets contain 13,016 images each.\nClassifier f We finetuned for 10 epochs using the Adam optimizer, a learning rate of 0.001, and a batch size of 2048. We chose the model that scored the best validation accuracy, measured at the end of each epoch.\nVAE training details We trained the VAE for 400 epochs using the Adam optimizer, a learning rate of 0.001, and a minibatch size of 2048. 
We set β to 4 and applied a linear annealing schedule on the KL divergence for 5000 optimization steps. We set z to have 10 dimensions.\nIdentification We set Q to 100. We also set the maximum size of the identified set to 10. We set a and b to 75.\nDistillation We set K to 100. We fixed ε to 0.01. The distillation step identified 32 plausible failure scenarios. The annotators deemed 6 of these to be failure scenarios. We set τ to 0.01 during review.\nCorrection We used a minibatch size of 2048, the Adam optimizer, and a learning rate of 0.001. We considered a range of regularization weights: [1e-10, 1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1, 2, 5, 10, 20, 100, 1000]. We finetuned for 5 epochs.\nB.5 TSNE EXAMPLE DETAILS\nWe run t-SNE on 10,000 examples from the training data and 516 unrestricted adversarial examples, setting the perplexity to 30. For the sake of clarity, we do not include outliers from the unrestricted adversarial examples. Namely, we only include unrestricted adversarial examples with > 1% probability of being in one of the MNIST failure scenario clusters."
},
{
"heading": "C ANNOTATOR INTERFACE",
"text": "We provide a screenshot of the annotator interface in figure 14."
},
{
"heading": "D ADDITIONAL EXPERIMENTAL RESULTS",
"text": "D.1 ADDITIONAL SAMPLES FROM MNIST FAILURE SCENARIOS\nWe provide additional examples from 10 randomly selected (no cherry picking) MNIST failure scenarios. We include the annotator consensus label for each failure scenario.\nD.2 ADDITIONAL SAMPLES FROM GERMAN SIGNS FAILURE SCENARIOS\nWe provide samples from all of the German signs failure scenarios. We provide the names of the class labels in figure 25. For each failure scenario, we indicate our assigned class label in the caption and the classifier predictions in the upper right hand corner of the image.\nD.3 ADDITIONAL SAMPLES FROM SVHN FAILURE SCENARIOS\nWe provide additional samples from each of the SVHN failure scenarios. The digit in the upper left hand corner is the classifier predicted label. The caption includes the Ground Truth worker labels."
}
]
 2020
 DEFUSE: DEBUGGING CLASSIFIERS THROUGH DIS

SP:bbaedd5d8e7591fa3a5587260bf19f3d05779976
 [
"The paper proposes a model for *variable selection* in *Mixed Integer Programming (MIP)* solvers. While this problem is clearly a sequential decision making task, modeling it as an MDP is challenging. As a result, existing works use other approaches such as ranking or imitation learning. This paper overcomes these challenges by introducing a new problem representation. "
]
 Branch-and-Bound (B&B) is a general and widely used algorithmic paradigm for solving Mixed Integer Programming (MIP). Recently there has been a surge of interest in designing learning-based branching policies as a fast approximation of strong branching, a human-designed heuristic. In this work, we argue that strong branching is not a good expert to imitate because of its poor decision quality once its side effects in solving the branch linear programs are turned off. To obtain policies more effective and less myopic than a local heuristic, we formulate the branching process in MIP as reinforcement learning (RL) and design a novel set representation and distance function for the B&B process associated with a policy. Based on this representation, we develop a novelty search evolutionary strategy for optimizing the policy. Across a range of NP-hard problems, our trained RL agent significantly outperforms expert-designed branching rules and state-of-the-art learning-based branching methods in terms of both speed and effectiveness. Our results suggest that with carefully designed policy networks and learning algorithms, reinforcement learning has the potential to advance algorithms for solving MIPs.
 []
 [
{
"authors": [
"Tobias Achterberg"
],
"title": "Conflict analysis in mixed integer programming",
"venue": "Discrete Optimization4(1):,",
"year": 2007
},
{
"authors": [
"Tobias Achterberg"
],
"title": "Scip: solving constraint integer programs",
"venue": "Mathematical Programming Computation1",
"year": 2009
},
{
"authors": [
"Tobias Achterberg",
"Timo Berthold"
],
"title": "Hybrid branching. In International Conference on AI and OR Techniques in Constriant Programming for Combinatorial Optimization Problems",
"venue": null,
"year": 2009
},
{
"authors": [
"Tobias Achterberg",
"Roland Wunderling"
],
"title": "Mixed integer programming: Analyzing 12 years of progress. In Facets of combinatorial optimization pp",
"venue": null,
"year": 2013
},
{
"authors": [
"Tobias Achterberg",
"Thorsten Koch",
"Alexander Martin"
],
"title": "Branching rules revisited",
"venue": "Operations Research Letters33(1):,",
"year": 2005
},
{
"authors": [
"Réka Albert",
"AlbertLászló Barabási"
],
"title": "Statistical mechanics of complex networks",
"venue": "Reviews of modern physics74(1):,",
"year": 2002
},
{
"authors": [
"Alejandro Marcos Alvarez",
"Quentin Louveaux",
"Louis Wehenkel"
],
"title": "A machine learningbased approximation of strong branching",
"venue": "INFORMS Journal on Computing29(1):,",
"year": 2017
},
{
"authors": [
"Karl J Astrom"
],
"title": "Optimal control of markov processes with incomplete state information",
"venue": "Journal of mathematical analysis and applications10(1):,",
"year": 1965
},
{
"authors": [
"Egon Balas",
"Andrew Ho"
],
"title": "Set covering algorithms using cutting planes, heuristics, and subgradient optimization: a computational study",
"venue": "In Combinatorial Optimization pp. . Springer,",
"year": 1980
},
{
"authors": [
"Cynthia Barnhart",
"Amy M Cohn",
"Ellis L Johnson",
"Diego Klabjan",
"George L Nemhauser",
"Pamela H Vance"
],
"title": "Airline crew scheduling",
"venue": "In Handbook of transportation science pp. . Springer,",
"year": 2003
},
{
"authors": [
"Yoshua Bengio",
"Andrea Lodi",
"Antoine Prouvost"
],
"title": "Machine learning for combinatorial optimization: a methodological tour d’horizon",
"venue": "European Journal of Operational Research,",
"year": 2020
},
{
"authors": [
"Timo Berthold"
],
"title": "Primal heuristics for mixed integer programs",
"venue": null,
"year": 2006
},
{
"authors": [
"Edoardo Conti",
"Vashisht Madhavan",
"Felipe Petroski Such",
"Joel Lehman",
"Kenneth Stanley",
"Jeff Clune"
],
"title": "Improving exploration in evolution strategies for deep reinforcement learning via a population of noveltyseeking agents",
"venue": "In Advances in neural information processing systems",
"year": 2018
},
{
"authors": [
"Gérard Cornuéjols",
"Ranjani Sridharan",
"JeanMichel"
],
"title": "Thizy. A comparison of heuristics and relaxations for the capacitated plant location problem",
"venue": "European journal of operational research50(3):,",
"year": 1991
},
{
"authors": [
"Giovanni Di Liberto",
"Serdar Kadioglu",
"Kevin Leo",
"Yuri Malitsky. Dash"
],
"title": "Dynamic approach for switching heuristics",
"venue": "European Journal of Operational",
"year": 2016
},
{
"authors": [
"Marc Etheve",
"Zacharie Alès",
"Côme Bissuel",
"Olivier Juan",
"Safia KedadSidhoum"
],
"title": "Reinforcement learning for variable selection in a branch and bound algorithm",
"venue": "arXiv preprint arXiv:2005.10026,",
"year": 2020
},
{
"authors": [
"Gerald Gamrath",
"Daniel Anderson",
"Ksenia Bestuzheva",
"WeiKun Chen",
"Leon Eifler",
"Maxime Gasse",
"Patrick Gemander",
"Ambros Gleixner",
"Leona Gottwald",
"Katrin Halbig"
],
"title": "The scip optimization suite",
"venue": null,
"year": 2020
},
{
"authors": [
"Maxime Gasse",
"Didier Chételat",
"Nicola Ferroni",
"Laurent Charlin",
"Andrea Lodi"
],
"title": "Exact combinatorial optimization with graph convolutional neural networks",
"venue": "In Advances in Neural Information Processing Systems",
"year": 2019
},
{
"authors": [
"Christoph Hansknecht",
"Imke Joormann",
"Sebastian Stiller"
],
"title": "Cuts, primal heuristics, and learning to branch for the timedependent traveling salesman problem",
"venue": "arXiv preprint arXiv:1805.01415,",
"year": 2018
},
{
"authors": [
"Elias Boutros Khalil",
"Pierre Le Bodic",
"Le Song",
"George Nemhauser",
"Bistra Dilkina"
],
"title": "Learning to branch in mixed integer programming",
"venue": "In Thirtieth AAAI Conference on Artificial Intelligence,",
"year": 2016
},
{
"authors": [
"Rafael A Melo",
"Laurence A Wolsey"
],
"title": "Mip formulations and heuristics for twolevel productiontransportation problems",
"venue": "Computers & Operations",
"year": 2012
},
{
"authors": [
"Robert Rodosek",
"Mark G Wallace",
"Mozafar T Hajian"
],
"title": "A new approach to integrating mixed integer programming and constraint logicprogramming",
"venue": "Annals of Operations",
"year": 1999
},
{
"authors": [
"Tim Salimans",
"Jonathan Ho",
"Xi Chen",
"Szymon Sidor",
"Ilya Sutskever"
],
"title": "Evolution strategies as a scalable alternative to reinforcement learning",
"venue": "arXiv preprint arXiv:1703.03864,",
"year": 2017
},
{
"authors": [
"Cédric Villani"
],
"title": "Optimal transport: old and new, volume 338",
"venue": "Springer Science & Business Media,",
"year": 2008
},
{
"authors": [
"Minjie Wang",
"Lingfan Yu",
"Da Zheng",
"Quan Gan",
"Yu Gai",
"Zihao Ye",
"Mufei Li",
"Jinjing Zhou",
"Qi Huang",
"Chao Ma"
],
"title": "Deep graph library: Towards efficient and scalable deep learning on graphs",
"venue": "arXiv preprint arXiv:1909.01315,",
"year": 2019
},
{
"authors": [
"Daan Wierstra",
"Tom Schaul",
"Jan Peters",
"Juergen Schmidhuber"
],
"title": "Natural evolution strategies",
"venue": "IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence)",
"year": 2008
},
{
"authors": [
"Laurence A Wolsey",
"George L Nemhauser"
],
"title": "Integer and combinatorial optimization, volume 55",
"venue": null,
"year": 1999
}
]
 [
{
"heading": "1 INTRODUCTION",
"text": "Mixed Integer Programming (MIP) has been applied widely in many realworld problems, such as scheduling (Barnhart et al., 2003) and transportation (Melo & Wolsey, 2012). Branch and Bound (B&B) is a general and widely used paradigm for solving MIP problems (Wolsey & Nemhauser, 1999). B&B recursively partitions the solution space into a search tree and compute relaxation bounds along the way to prune subtrees that provably can not contain an optimal solution. This iterative process requires sequential decision makings: node selection: selecting the next solution space to evaluate, variable selection: selecting the variable by which to partition the solution space (Achterberg & Berthold, 2009). In this work, we focus on learning a variable selection strategy, which is the core of the B&B algorithm (Achterberg & Wunderling, 2013).\nVery often, instances from the same MIP problem family are solved repeatedly in industry, which gives rise to the opportunity for learning to improve the variable selection policy (Bengio et al., 2020). Based on the humandesigned heuristics, Di Liberto et al. (2016) learn a classifier that dynamically selects an existing rule to perform variable selection; Balcan et al. (2018) consider a weighted score of multiple heuristics and analyse the sample complexity of finding such a good weight. The first step towards learning a variable selection policy was taken by Khalil et al. (2016), who learn an instance customized policy in an online fashion, as well as Alvarez et al. (2017) and Hansknecht et al. (2018) who learn a branching rule offline on a collection of similar instances. Those methods need extensively feature engineering and require strong domain knowledge in MIP. To avoid that, Gasse et al. (2019) propose a graph convolutional neural network approach to obtain competitive performance, only requiring raw features provided by the solver. 
In each case, the branching policy is learned by imitating the decisions of strong branching, as it consistently leads to the smallest B&B trees empirically (Achterberg et al., 2005).\nIn this work, we argue that strong branching is not a good expert to imitate. The excellent performance (the smallest B&B tree) of strong branching relies mostly on the information obtained in solving the branch linear programs (LPs) rather than on the decisions it makes. This prevents learning a good policy by imitating only the decisions made by strong branching. To obtain more effective and non-myopic policies, i.e., minimizing the total number of solved nodes rather than maximizing the immediate duality gap, we use reinforcement learning (RL) and model the variable selection process as a Markov Decision Process (MDP). Though the MDP formulation for MIP has been mentioned in previous works (Gasse et al., 2019; Etheve et al., 2020), the advantage of RL has not been demonstrated clearly in the literature.\nThe challenges of using RL are threefold. First, the state space is a complex search tree, which can involve hundreds or thousands of nodes (with a linear program at each node) and evolves over time. Meanwhile, the objective of MIP is to solve problems faster. Hence a trade-off between decision quality and computation time is required when representing the state and designing a policy based on this state representation. Second, learning a branching policy by RL requires rolling out on a distribution of instances. Moreover, for each instance, the solving trajectory can contain thousands of steps, and actions can have long-lasting effects. These result in a large variance in gradient estimation. Third, each step of variable selection can have hundreds of candidates. 
The large action set makes exploration in MIP very hard.\nIn this work, we address these challenges by designing a policy network inspired by primal-dual iteration and employing a novelty search evolutionary strategy (NS-ES) to improve the policy. For the efficiency-effectiveness trade-off, the primal-dual policy ignores redundant information and makes high-quality decisions on the fly. For reducing variance, the ES algorithm is an attractive choice as its gradient estimation is independent of the trajectory length (Salimans et al., 2017). For exploration, we introduce a new representation of the B&B solving process employed by novelty search (Conti et al., 2018) to encourage visiting new states.\nWe evaluate our RL-trained agent over a range of problems (namely, set covering, maximum independent set, and capacitated facility location). The experiments show that our approach significantly outperforms state-of-the-art human-designed heuristics (Achterberg & Berthold, 2009) as well as imitation-based learning methods (Khalil et al., 2016; Gasse et al., 2019). In the ablation study, we compare our primal-dual policy net with the GCN (Gasse et al., 2019) and our novelty-based ES with vanilla ES (Salimans et al., 2017). The results confirm that both our policy network and the novelty search evolutionary strategy are indispensable for the success of the RL agent. In summary, our main contributions are the following:\n• We point out the overestimation of the decision quality of strong branching and suggest that methods other than imitating strong branching are needed to find better variable selection policies. • We model the variable selection process as an MDP and design a novel policy net based on primal-dual iteration over reduced LP relaxations. 
• We introduce a novel set representation and an optimal transport distance for the branching process associated with a policy, based on which we train our RL agent using the novelty search evolution strategy and obtain substantial improvements in the empirical evaluation."
},
{
"heading": "2 BACKGROUND",
"text": "Mixed Integer Programming. MIP is an optimization problem, which is typically formulated as\nminx∈Rn {cTx : Ax ≤ b, ` ≤ x ≤ u, xj ∈ Z, ∀j ∈ J} (1)\nwhere c ∈ Rn is the objective vector, A ∈ Rm×n is the constraint coefficient matrix, b ∈ Rm is the constraint vector, `,u ∈ Rn are the variable bounds. The set J ⊆ {1, · · · , n} is an index set for integer variables. We denote the feasible region of x as X .\nLinear Programming Relaxation. LP relaxation is an important building block for solving MIP problems, where the integer constraints are removed:\nminx∈Rn {cTx : Ax ≤ b, ` ≤ x ≤ u}. (2)\nAlgorithm 1: Branch and Bound Input: A MIP P in form Equation 1 Output: An optimal solution set x∗ and\noptimal value c∗ 1 Initialize the problem set S := {PLP }. where PLP is in form Equation 2. Set x∗ = φ, c∗ =∞ ; 2 If S = φ, exit by returning x∗ and c∗ ; 3 Select and pop a LP relaxation Q ∈ S ; 4 Solve Q with optimal solution x̂ and optimal\nvalue ĉ ; 5 If ĉ ≥ c∗, go to 2 ; 6 If x̂ ∈ X , set x∗ = x̂, c∗ = ĉ, go to 2 ; 7 Select variable j, split Q into two subproblems Q+j and Q − j , add them to S and go to 3 ;\nBranch and Bound. LP based B&B is the most successful method in solving MIP. A typical LP based B&B algorithm for solving MIP looks as Algorithm 1 (Achterberg et al., 2005).\nIt consists of two major decisions: node selection, in line 3, and variable selection, in line 7. In this paper, we will focus on the variable selection. Given a LP relaxation and its optimal solution x̂, the variable selection means selecting an index j. Then, branching splits the current problem into two subproblems, each representing the original LP relaxation with a new constraint xj ≤ bx̂jc for Q−j and xj ≥ dx̂je for Q + j respectively. This procedure can be visualized by a binary tree, which is commonly called search tree. We give a simple visualization in Section A.1.\nEvolution Strategy. Evolution Strategies (ES) is a class of black box optimization algorithm (Rechenberg, 1978). 
In this work, we follow the definition of Natural Evolution Strategies (NES) (Wierstra et al., 2008). NES represents the population as a distribution pφ(θ) over parameter vectors θ, characterized by parameters φ. NES optimizes φ to maximize the expected fitness over the population, Eθ∼pφ[f(θ)]. In recent work, Salimans et al. (2017) outline a version of NES applied to standard RL benchmark problems, where θ parameterizes the policy πθ, φt = (θt, σ) parameterizes a Gaussian distribution pφ(θ) = N(θt, σ2I), and the fitness f(θ) is the cumulative reward R(θ) over a full episode of agent interaction. At every iteration, Salimans et al. (2017) apply n additive Gaussian noise vectors εi ∼ N(0, I) to the current parameters and update the population as\nθt+1 = θt + α (1/(nσ)) ∑i=1..n f(θt + σεi) εi (3)\nTo encourage exploration, Conti et al. (2018) propose the Novelty Search Evolution Strategy (NSES). In NSES, the fitness f(θ) = λN(θ) + (1−λ)R(θ) is a combination of a domain-specific novelty score N and the cumulative reward R, where λ is the balancing weight."
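The NES update in Equation 3 can be sketched directly in NumPy. The fitness below (negative squared distance to a hidden target) is a stand-in for the cumulative reward R(θ), and the step size, noise scale, and population size are illustrative choices of ours, not the paper's hyperparameters.

```python
# Minimal sketch of the ES update of Equation 3 on a toy fitness.
import numpy as np

def es_step(theta, fitness, n=100, sigma=0.1, alpha=0.03, rng=None):
    rng = rng or np.random.default_rng(0)
    eps = rng.standard_normal((n, theta.size))      # n Gaussian perturbations eps_i
    f = np.array([fitness(theta + sigma * e) for e in eps])
    # Equation 3: theta_{t+1} = theta_t + alpha * (1/(n*sigma)) * sum_i f_i * eps_i
    return theta + alpha / (n * sigma) * (f @ eps)

target = np.array([1.0, -2.0, 0.5])                 # hidden optimum of the toy fitness
fitness = lambda th: -np.sum((th - target) ** 2)    # stand-in for the reward R(theta)

theta = np.zeros(3)
rng = np.random.default_rng(42)
for _ in range(300):
    theta = es_step(theta, fitness, rng=rng)        # theta drifts toward target
```

Note that only fitness values, never gradients of the policy, enter the update, which is why the estimator's variance does not grow with trajectory length; NSES would simply replace `fitness` with λN(θ) + (1−λ)R(θ).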
},
{
"heading": "3 WHY IMITATING STRONG BRANCHING IS NOT GOOD",
"text": "Strong branching is a humandesigned heuristic, which solves all possible branch LPs Q+j , Q − j ahead of branching. As strong branching usually produces the smallest B&B search trees (Achterberg, 2009), many learningbased variable selection policy are trained by mimicking strong branching (Gasse et al., 2019; Khalil et al., 2016; Alvarez et al., 2017; Hansknecht et al., 2018). However, we claim that strong branching is not a good expert: the reason strong branching can produce a small search tree is the reduction obtained in solving branch LP, rather than its decision quality. Specifically, (i) Strong branching can check lines 5, 6 in Algorithm 1 before branching. If the pruning condition is satisfied, strong branching does not need to add the subproblem into the problem set S. (ii) Strong branching can strengthen other LP relaxations in the problem set S via domain propagation (Rodosek et al., 1999) and conflict analysis (Achterberg, 2007). For example, if strong branching finds x1 ≥ 1 and x2 ≥ 1 can be pruned during solving branch LP, then any other LP relaxations containing x1 ≥ 1 can be strengthened by adding x2 ≤ 0. These two reductions are\nthe direct consequence of solving branch LP, and they can not be learned by a variable selection policy. (iii) Strong branching activates primal heuristics (Berthold, 2006) after solving LPs.\nTo examine the decision quality of strong branching, we employ vanilla full strong branching (Gamrath et al., 2020), which takes the same decision as full strong branching, while the sideeffect of solving branch LP is switched off. Experiments in Section 5.2 show that vanilla full strong branching has poor decision quality. Hence, imitating strong branching is not a wise choice for learning variable selection policy."
},
{
"heading": "4 METHOD",
"text": "Due to line 5 in Algorithm 1, a good variable selection policy can significantly improve solving efficiency. To illustrate how to improve variable selection policy, we organize this section in three parts. First, we present our formulation of the variable selection process as a RL problem. Next, we introduce the LP relaxation based state representation and the primaldual based policy network. Then, we introduce our branching process representation and the corresponding NSES training algorithm."
},
{
"heading": "4.1 RL FORMULATION",
"text": "Let the B&B algorithm and problem distribution D be the environment. The sequential decision making of variable selection can be formulated as a Markov decision process. We specify state space S, action space A, transition P and reward r as follows • State Space. At iteration t, node selection policy will pop out a LP relaxation PLP from the problem set S. We set the representation of the state to st = {PLP , J, S}, where J is the index set of integer varia 