Due to the success of deep learning in solving a variety of challenging machine learning tasks, there is a rising interest in understanding loss functions for training neural networks from a theoretical perspective. In particular, the properties of critical points and the landscape around them are important in determining the convergence performance of optimization algorithms. In this paper, we provide a necessary and sufficient characterization of the analytical forms for the critical points (as well as global minimizers) of the square loss functions for linear neural networks. We show that the analytical forms of the critical points characterize the values of the corresponding loss functions as well as the necessary and sufficient conditions to achieve global minimum. Furthermore, we exploit the analytical forms of the critical points to characterize the landscape properties for the loss functions of linear neural networks and shallow ReLU networks. One particular result is that while the loss function of linear networks has no spurious local minimum, the loss function of one-hidden-layer nonlinear networks with ReLU activation does have local minima that are not global minima. In the past decade, deep neural networks BID8 have become a popular tool that has successfully solved many challenging tasks in a variety of areas such as machine learning, artificial intelligence, computer vision, and natural language processing. As the current understanding of deep neural networks is mostly based on empirical studies, there is a rising need and interest to develop a theoretical understanding of neural networks, e.g., their generalization error, representation power, and landscape (also referred to as geometry) properties. In particular, the landscape properties of loss functions (which are typically nonconvex for neural networks) play a central role in determining the iteration path and convergence performance of optimization algorithms. One major landscape property is the nature of critical points, which can be global minima, local minima, or saddle points. There have been intensive efforts in the past to understand this issue for various neural networks. For example, it has been shown that every local minimum of the loss function is also a global minimum for shallow linear networks under the autoencoder setting and invertibility assumptions BID1, and for deep linear networks BID11; BID14, respectively, under different assumptions. Conditions for the equivalence between local minima (or critical points) and global minima have also been established for various nonlinear neural networks BID9; BID15; BID17; BID6 under respective assumptions. However, most previous studies did not characterize the analytical forms of the critical points of loss functions for neural networks, with only very few exceptions. In BID1, the authors provided an analytical form for the critical points of the square loss function of shallow linear networks under certain conditions. Such an analytical form further helps to establish the landscape properties around the critical points. Further, in BID13, the authors characterized certain sufficient forms of critical points for the square loss function of matrix factorization problems and deep linear networks.
The focus of this paper is on characterizing the sufficient and necessary forms of critical points for broader scenarios, i.e., shallow and deep linear networks with no assumptions on data matrices and network dimensions, and shallow ReLU networks over certain regions of the parameter space. In particular, such analytical forms of critical points capture the corresponding loss function values and the necessary and sufficient conditions to achieve global minimum. This further enables us to establish new landscape properties around these critical points for the loss functions of these networks under general settings, and provides alternative (yet simpler and more intuitive) proofs for the existing understanding of the landscape properties. OUR CONTRIBUTIONS 1) For the square loss function of linear networks with one hidden layer, we provide a full (necessary and sufficient) characterization of the analytical forms for its critical points and global minimizers. These results generalize the characterization in BID1 to arbitrary network parameter dimensions and any data matrices. Such a generalization further enables us to establish the landscape property, i.e., every local minimum is also a global minimum and all other critical points are saddle points, under no assumptions on parameter dimensions and data matrices. From a technical standpoint, we exploit the analytical forms of critical points to provide a new proof for characterizing the landscape around the critical points under full relaxation of assumptions, where the corresponding approaches in BID1 are not applicable. As a special case of linear networks, the matrix factorization problem satisfies all these landscape properties. 2) For the square loss function of deep linear networks, we establish a full (necessary and sufficient) characterization of the analytical forms for its critical points and global minimizers. Such characterizations are new and have not been established in the existing literature. Furthermore, these analytical forms divide the set of non-global-minimum critical points into different categories. We identify the directions along which the loss function value decreases for two categories of the critical points, for which our results directly imply the equivalence between local minima and global minima. For these cases, our proof generalizes the results in BID11 under no assumptions on the network parameter dimensions and data matrices. 3) For the square loss function of one-hidden-layer nonlinear neural networks with ReLU activation function, we provide a full characterization of both the existence and the analytical forms of the critical points in certain types of regions in the parameter space. In particular, in the case where there is one hidden unit, our results fully characterize the existence and the analytical forms of the critical points in the entire parameter space. Such characterizations were not provided in previous work on nonlinear neural networks. Moreover, we apply our results to a concrete example to demonstrate that both local minima that are not global minima and local maxima do exist in such a case. Analytical forms of critical points: Characterizing the analytical form of critical points for loss functions of neural networks dates back to BID1, where the authors provided an analytical form of the critical points for the square loss function of linear networks with one hidden layer. In BID13, the authors provided a sufficient condition for critical points of a generic function, i.e., the fixed points of invariant groups.
They then characterized certain sufficient forms of critical points for the square loss function of matrix factorization problems and deep linear networks, whereas our results provide sufficient and necessary forms of critical points for deep linear networks via a different approach. Properties of critical points: BID1; BID0 studied the linear autoencoder with one hidden layer and showed the equivalence between local minima and global minima. Moreover, BID2 generalized these results to the complex-valued autoencoder setting. Deep linear networks were studied in recent work BID11; BID14, in which the equivalence between local minima and global minima was established, respectively, under different assumptions. A necessary and sufficient condition for a critical point of a deep linear network to be a global minimum has also been established, and a similar result was established in BID7 for deep linear networks under the setting that the widths of intermediate layers are larger than those of the input and output layers. The effect of regularization on the critical points of a two-layer linear network has also been studied. For nonlinear neural networks, a network with one hidden layer and sigmoid activation function has been studied, with the result that every local minimum is also a global minimum provided that the number of input units equals the number of data samples. BID9 considered a class of multi-layer nonlinear networks with a pyramidal structure, and showed that all critical points of full column rank achieve zero loss when the sample size is less than the input dimension. These results were further generalized to a larger class of nonlinear networks in BID15, which also showed that critical points with non-degenerate Hessian are global minima. BID3 b) connected the loss surface of deep nonlinear networks with the Hamiltonian of the spin-glass model under certain assumptions and characterized the distribution of local minima. BID11 further eliminated some of the assumptions in BID3, and established the equivalence between local minima and global minima by reducing the loss function of the deep nonlinear network to that of the deep linear network. BID17 showed that a two-layer nonlinear network has no bad differentiable local minimum. BID6 studied a one-hidden-layer nonlinear neural network with the parameters restricted to a set of directions (lines), and showed that most local minima are global minima. A two-layer ReLU network with Gaussian input data has also been considered, where critical points in certain regions were shown to be non-isolated and the critical-point-free regions were characterized. Geometric curvature: BID10 established the gradient dominance condition of deep linear residual networks, and further established the gradient dominance condition and regularity condition around the global minimizers for deep linear, deep linear residual and shallow nonlinear networks. BID12 studied the property of the Hessian matrix for deep linear residual networks. The local strong convexity property was established in BID16 for overparameterized nonlinear networks with one hidden layer and quadratic activation functions, and has also been established for a class of nonlinear networks with one hidden layer and Gaussian input data, where local linear convergence of the gradient descent method with tensor initialization was further established.
BID18 studied a one-hidden-layer nonlinear network with a single output, and showed that the volume of sub-optimal differentiable local minima is exponentially vanishing in comparison with the volume of global minima. BID5 investigated the saddle points in deep neural networks using results from statistical physics and random matrix theory. Notation: The pseudoinverse, column space and null space of a matrix M are denoted by M^†, col(M) and ker(M), respectively. For any index sets I, J ⊂ N, M_{I,J} denotes the submatrix of M formed by the entries with the row indices in I and the column indices in J. For positive integers i ≤ j, we define i:j = {i, i+1, ..., j−1, j}. The projection operator onto a linear subspace V is denoted by P_V. In this section, we study linear neural networks with one hidden layer. Suppose we have an input data matrix X ∈ R^{d_0×m} and a corresponding output data matrix Y ∈ R^{d_2×m}, where there are in total m data samples. We are interested in learning a model that maps from X to Y via a linear network with one hidden layer. Specifically, we denote the weight parameters between the output layer and the hidden layer of the network as A_2 ∈ R^{d_2×d_1}, and denote the weight parameters between the hidden layer and the input layer of the network as A_1 ∈ R^{d_1×d_0}. We are interested in the square loss function of this linear network, which is given by L(A_1, A_2) := (1/2) ||A_2 A_1 X − Y||_F^2. Note that in the special case where X = I, L reduces to a loss function for the matrix factorization problem, to which all our results apply. The loss function L has been studied in BID1 under the assumptions that d_2 = d_0 ≥ d_1 and the matrices XX^T and YX^T(XX^T)^{−1}XY^T are invertible. In our study, no assumption is made on either the parameter dimensions or the invertibility of the data matrices. Such full generalization of the results in BID1 turns out to be critical for our study of nonlinear shallow neural networks in Section 4. We further define Σ := YX^†XY^T and denote its full singular value decomposition as UΛU^T. Suppose that Σ has r distinct positive singular values σ_1 > ··· > σ_r > 0 with multiplicities m_1, ..., m_r, respectively, and has m̄ zero singular values. Recall that DISPLAYFORM1. Our first result provides a full characterization of all critical points of L. Theorem 1 (Characterization of critical points). All critical points of L are necessarily and sufficiently characterized by a matrix L_1 ∈ R^{d_1×d_0}, a block matrix V ∈ R^{d_2×d_1} and an invertible matrix C ∈ R^{d_1×d_1} via the equations DISPLAYFORM2 and DISPLAYFORM3, where the blocks V_i ∈ R^{m_i×p_i} and V̄ ∈ R^{m̄×p̄} consist of orthonormal columns, with the numbers of columns satisfying DISPLAYFORM4. Theorem 1 characterizes the necessary and sufficient forms for all critical points of L. Intuitively, the matrix C captures the invariance of the product A_2 A_1 under an invertible transform, and L_1 captures the degree of freedom of the solution set for linear systems. In general, the set of critical points is uncountable and cannot be fully listed out. However, the analytical forms in Theorem 1 do allow one to construct critical points of L by specifying choices of L_1, V, C that fulfill the consistency condition of the theorem. For example, choosing L_1 = 0 guarantees this condition, in which case the analytical forms yield a critical point (C^{−1}V^T U^T Y X^†, UVC) for any invertible matrix C and any block matrix V that takes the form specified in Theorem 1. For nonzero L_1, one can fix a proper V and solve the resulting linear equation for C. If a solution exists, we then obtain the form of a corresponding critical point.
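To make this construction concrete, the following is a minimal numerical sketch (ours, not part of the paper) that builds a critical point with L_1 = 0 for generic random data (so all multiplicities m_i equal one and each p_i is 0 or 1), checks that both gradients of L vanish, and checks that the loss value equals (1/2)(Tr(YY^T) minus the sum of the captured singular values):

```python
import numpy as np

rng = np.random.default_rng(0)
d0, d1, d2, m = 5, 3, 4, 20          # input dim, hidden dim, output dim, samples
X = rng.standard_normal((d0, m))
Y = rng.standard_normal((d2, m))

# Sigma = Y X^+ X Y^T and its eigen-decomposition U Lambda U^T (Sigma is symmetric PSD).
Xp = np.linalg.pinv(X)
Sigma = Y @ Xp @ X @ Y.T
lam, U = np.linalg.eigh(Sigma)
lam, U = lam[::-1], U[:, ::-1]       # sort singular values in decreasing order

# Critical point from Theorem 1 with L1 = 0: A2 = U V C, A1 = C^{-1} V^T U^T Y X^+.
k = 2                                 # number of singular values captured (p_i = 1 for i <= k)
V = np.zeros((d2, d1)); V[:k, :k] = np.eye(k)
C = rng.standard_normal((d1, d1)) + d1 * np.eye(d1)   # a generic invertible matrix
A2 = U @ V @ C
A1 = np.linalg.inv(C) @ V.T @ U.T @ Y @ Xp

# Gradients of L = 0.5 * ||A2 A1 X - Y||_F^2.
R = A2 @ A1 @ X - Y
grad_A1 = A2.T @ R @ X.T
grad_A2 = R @ (A1 @ X).T
print(np.linalg.norm(grad_A1), np.linalg.norm(grad_A2))   # both ~ 0 up to round-off

# Loss value matches 0.5 * (Tr(Y Y^T) - sum of the captured singular values).
loss = 0.5 * np.linalg.norm(R) ** 2
print(loss, 0.5 * (np.trace(Y @ Y.T) - lam[:k].sum()))    # equal up to round-off
```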
More importantly, the analytical structures of the critical points have direct implications on the global optimality conditions and landscape properties, as we show in the remainder of this section. Remark 1. We note that the block pattern parameters {p_i}_{i=1}^r and p̄ denote the numbers of columns of {V_i}_{i=1}^r and V̄, respectively, and their sum equals the rank of A_2, i.e., p_1 + ··· + p_r + p̄ = rank(A_2). The parameters p_i, i = 1, ..., r, p̄ of V contain all useful information of the critical points that determine the function value of L, as presented in the following proposition. Proposition 1 (Function value at critical points). Any critical point (A_1, A_2) of L characterized by Theorem 1 satisfies L(A_1, A_2) = (1/2)(Tr(YY^T) − Σ_{i=1}^r p_i σ_i). Proposition 1 evaluates the function value L at a critical point using the parameters {p_i}_{i=1}^r. To explain further, recall that the data matrix Σ has each singular value σ_i with multiplicity m_i. For each i, the critical point captures p_i out of m_i singular values σ_i. Hence, for a σ_i with larger value (i.e., a smaller index i), it is desirable that a critical point captures a larger number p_i of them. In this way, the critical point captures more important principal components of the data so that the value of the loss function is further reduced, as suggested by Proposition 1. In summary, the parameters {p_i}_{i=1}^r characterize how well the learned model fits the data in terms of the value of the loss function. Moreover, the parameters {p_i}_{i=1}^r also determine a full characterization of the global minimizers, as given below. Proposition 2 (Characterization of global minimizers). A critical point (A_1, A_2) of L is a global minimizer if and only if it falls into one of the following two cases: DISPLAYFORM7. The analytical form of any global minimizer can be obtained from Theorem 1 with further specification to the above two cases. Proposition 2 establishes the necessary and sufficient conditions for any critical point to be a global minimizer. If the data matrix Σ has a large number of nonzero singular values, i.e., the first case, one needs to exhaust the representation budget (i.e., rank) of A_2 and capture as many large singular values as the rank allows to achieve the global minimum; otherwise, the A_2 of a global minimizer can be rank deficient and still capture all nonzero singular values. Note that A_2 must be full rank in case 1, and so is A_1 if we further adopt the assumptions on the network size and data matrices in BID1. Furthermore, the parameters {p_i}_{i=1}^r naturally divide all non-global-minimum critical points (A_1, A_2) of L into the following two categories. • (Non-optimal order): The matrix V specified in Theorem 1 satisfies that there exist 1 ≤ i < j ≤ r such that p_i < m_i and p_j > 0. • (Optimal order): rank(A_2) < min{d_2, d_1} and the matrix V specified in Theorem 1 satisfies DISPLAYFORM8. To understand the above two categories, note that a critical point of L with non-optimal order captures a smaller singular value σ_j (since p_j > 0) while skipping a larger singular value σ_i with a lower index i < j (since p_i < m_i), and hence cannot be a global minimizer. On the other hand, although a critical point of L with optimal order captures the singular values in the optimal (i.e., decreasing) order, it does not fully utilize the representation budget of A_2 (because A_2 is rank deficient) to further capture nonzero singular values and reduce the function value, and hence cannot be a global minimizer either. Next, we show that these two types of non-global-minimum critical points have different landscape properties around them.
Throughout, a matrix M′ is called a perturbation of M if it lies in an arbitrarily small neighborhood of M. Proposition 3 (Landscape around critical points). The critical points of L have the following landscape properties. 1. A non-optimal-order critical point (A_1, A_2) has a perturbation (A_1′, A_2′) with rank(A_2′) = rank(A_2), which achieves a lower function value; 2. An optimal-order critical point (A_1, A_2) has a perturbation (A_1′, A_2′) with rank(A_2′) = rank(A_2) + 1, which achieves a lower function value; 3. Any point in X := {(A_1, A_2) : A_2 A_1 X ≠ 0} has a perturbation (A_1′, A_2′) which achieves a higher function value. As a consequence, items 1 and 2 imply that any non-global-minimum critical point has a descent direction, and hence cannot be a local minimizer. Thus, any local minimizer must be a global minimizer. Item 3 implies that any point has an ascent direction whenever the output is nonzero. Hence, there does not exist any local/global maximizer in X. Furthermore, item 3 together with items 1 and 2 implies that any non-global-minimum critical point in X has both descent and ascent directions, and hence must be a saddle point. We summarize these facts in the following theorem. Theorem 2 (Landscape of L). The loss function L satisfies: 1) every local minimum is also a global minimum; 2) every non-global-minimum critical point in X is a saddle point. We note that the saddle points in Theorem 2 can be non-strict when the data matrices are singular. As an illustrative example, consider the following loss function of a shallow linear network: L(a_1, a_2) = (1/2)(a_2 a_1 x − y)^2, where a_1, a_2, x and y are all scalars. Consider the case y = 0. Then, the Hessian at the saddle point a_1 = 0, a_2 = 1 is [x^2, 0; 0, 0], which does not have any negative eigenvalue. From a technical point of view, the proof of item 1 of Proposition 3 applies the approach in BID0 and generalizes it to the setting where Σ can have repeated singular values and may not be invertible. To further understand the perturbation scheme from a high-level perspective, note that non-optimal-order critical points capture a smaller singular value σ_j instead of a larger one σ_i with i < j. Thus, one naturally perturbs the singular vector corresponding to σ_j along the direction of the singular vector corresponding to σ_i. Such a perturbation scheme preserves the rank of A_2 and reduces the value of the loss function. More importantly, the proof of item 2 of Proposition 3 introduces a new technique. As a comparison, BID1 proves a result similar to item 2 using the strict convexity of the function, which requires the parameter dimensions to satisfy d_2 = d_0 ≥ d_1 and the data matrices to be invertible. In contrast, our proof completely removes these restrictions by introducing a new perturbation direction and exploiting the analytical forms of the critical points and the consistency condition in Theorem 1. Completing the proof further requires careful choices of perturbation parameters as well as judicious manipulations of matrices. We refer the reader to the supplemental materials for more details. At a high level, since optimal-order critical points capture the singular values in an optimal (i.e., decreasing) order, the previous perturbation scheme for non-optimal-order critical points does not apply. Instead, we increase the rank of A_2 by one in a way that the perturbed matrix captures the next singular value beyond the ones that have already been captured, so that the value of the loss function can be further reduced.
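To illustrate the rank-increasing perturbation in item 2 numerically, the following sketch (ours; the particular perturbation direction is chosen in the spirit of the proof, with C = I and L_1 = 0, and is not claimed to be the paper's exact construction) starts from an optimal-order critical point that captures only the top singular value and shows that a small move toward the next singular vector strictly decreases the loss while increasing rank(A_2) by one:

```python
import numpy as np

rng = np.random.default_rng(1)
d0, d1, d2, m = 5, 3, 4, 20
X, Y = rng.standard_normal((d0, m)), rng.standard_normal((d2, m))
Xp = np.linalg.pinv(X)
Sigma = Y @ Xp @ X @ Y.T
lam, U = np.linalg.eigh(Sigma); lam, U = lam[::-1], U[:, ::-1]

def loss(A1, A2):
    return 0.5 * np.linalg.norm(A2 @ A1 @ X - Y) ** 2

# Optimal-order critical point capturing only the top singular value (rank(A2) = 1 < d1).
V = np.zeros((d2, d1)); V[0, 0] = 1.0
A2 = U @ V
A1 = V.T @ U.T @ Y @ Xp

# Perturb A2's unused column and the matching row of A1 toward the second singular vector.
eps = 1e-3
E2 = np.zeros_like(A2); E2[:, 1] = np.sqrt(eps) * U[:, 1]
E1 = np.zeros_like(A1); E1[1, :] = np.sqrt(eps) * U[:, 1] @ Y @ Xp
A2p, A1p = A2 + E2, A1 + E1           # rank(A2p) = rank(A2) + 1

print(loss(A1, A2), loss(A1p, A2p))   # the perturbed point attains a strictly smaller loss
```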
In this section, we study deep linear networks with ℓ ≥ 2 layers. We denote the weight parameters between the layers as A_k ∈ R^{d_k×d_{k−1}} for k = 1, ..., ℓ, respectively. The input and output data are denoted by X ∈ R^{d_0×m} and Y ∈ R^{d_ℓ×m}, respectively. We are interested in the square loss function of deep linear networks, which is given by L_D(A_1, ..., A_ℓ) := (1/2) ||A_ℓ ··· A_1 X − Y||_F^2. Analogously to the shallow case, the associated matrix for each k has r(k) distinct positive singular values σ_1(k) > ··· > σ_{r(k)}(k) with multiplicities m_1(k), ..., m_{r(k)}(k), respectively, and m̄(k) zero singular values. Our first result provides a full characterization of all critical points of L_D, where we denote A_{(j,i)} := A_j A_{j−1} ··· A_i for j ≥ i. Theorem 3 (Characterization of critical points). All critical points of L_D are necessarily and sufficiently characterized by matrices L_k, block matrices V_k and invertible matrices C_k via DISPLAYFORM2, and the individual parameters A_1, ..., A_ℓ can be expressed recursively via the following two equations: DISPLAYFORM3 and DISPLAYFORM4. Note that the forms of the individual parameters A_1, ..., A_ℓ can be obtained by recursively applying these two equations. First, one of the equations with k = 0 yields the form of A_{(ℓ,2)}. Then, the other equation with k = 0 and the form of A_{(ℓ,2)} yield the form of A_1. Next, the first equation with k = 1 yields the form of A_{(ℓ,3)}, and then the second equation with k = 1 and the forms of A_{(ℓ,3)}, A_1 further yield the form of A_2. Inductively, one obtains the expressions of all individual parameter matrices. Furthermore, the first condition in Theorem 3 is a consistency condition that guarantees that the analytical form for the entire product of parameter matrices factorizes into the forms of individual parameter matrices. Similarly to shallow linear networks, while the set of critical points here is also uncountable, Theorem 3 suggests ways to obtain some critical points. For example, if we set L_k = 0 for all k (so that the consistency condition is satisfied), we can obtain the form of critical points for any invertible C_k and proper V_k with the structure specified in Theorem 3. For nonzero L_k, the consistency condition needs to be verified for given C_k and V_k to determine a critical point. Similarly to shallow linear networks, the parameters {p_i(k)}, p̄(k) determine the value of the loss function at the critical points and further specify the analytical form of the global minimizers, as we present in the following two propositions. DISPLAYFORM5 DISPLAYFORM6 In particular, A_{(ℓ,2)} can be rank deficient with rank(A_{(ℓ,2)}) = DISPLAYFORM7. The analytical form of any global minimizer can be obtained from Theorem 3 with further specification to the above two cases. In particular, for case 1, if we further adopt the invertibility assumptions on data matrices as in BID1 and assume that all parameter matrices are square, then all global minima must correspond to full-rank parameter matrices. We next exploit the analytical forms of the critical points to further understand the landscape of the loss function L_D. It has been shown in BID11 that every local minimum of L_D is also a global minimum, under certain conditions on the parameter dimensions and the invertibility of the data matrices. Here, our characterization of the analytical forms of the critical points allows us to understand such a result from an alternative viewpoint. The proofs for certain cases (that we discuss below) are simpler and more intuitive, and no assumption is made on the data matrices and dimensions of the network. Similarly to shallow linear networks, we want to understand the local landscape around the critical points. However, due to the effect of depth, the critical points of L_D are more complicated than those of L.
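Since the recursive equations of Theorem 3 are only referenced implicitly here, a practical way to test whether a candidate tuple is a critical point of L_D is to evaluate the gradients directly, e.g., with automatic differentiation. A small sketch (ours; the all-zero tuple is used only because it is an easily verified saddle-type critical point of this loss):

```python
import torch

def deep_linear_loss(params, X, Y):
    """L_D = 0.5 * || A_l ... A_1 X - Y ||_F^2."""
    out = X
    for A in params:                  # params = [A_1, A_2, ..., A_l]
        out = A @ out
    return 0.5 * (out - Y).pow(2).sum()

def grad_norms(params, X, Y):
    loss = deep_linear_loss(params, X, Y)
    grads = torch.autograd.grad(loss, params)
    return [g.norm().item() for g in grads]

torch.manual_seed(0)
dims = [5, 4, 3, 4]                   # d_0, d_1, d_2, d_3 (l = 3 layers)
X, Y = torch.randn(dims[0], 20), torch.randn(dims[-1], 20)

zeros = [torch.zeros(dims[k + 1], dims[k], requires_grad=True) for k in range(3)]
randm = [torch.randn(dims[k + 1], dims[k], requires_grad=True) for k in range(3)]

print(grad_norms(zeros, X, Y))   # ~0: the all-zero tuple is a (non-global-minimum) critical point
print(grad_norms(randm, X, Y))   # generically nonzero: a random tuple is not a critical point
```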
Among them, we identify the following subsets of the non-global-minimum critical points. • (Deep-non-optimal order): There exists 0 ≤ k ≤ ℓ−2 such that the matrix V_k specified in Theorem 3 satisfies that there exist 1 ≤ i < j ≤ r(k) such that p_i(k) < m_i(k) and p_j(k) > 0. • (Deep-optimal order): (A_ℓ, A_{ℓ−1}) is not a global minimizer of L_D with A_{(ℓ−2,1)} being fixed, rank(A_ℓ) < min{d_ℓ, d_{ℓ−1}}, and the matrix V_{ℓ−2} specified in Theorem 3 satisfies DISPLAYFORM9. The following theorem summarizes the landscape of L_D around the above two types of critical points. Theorem 4 (Landscape of L_D). The loss function L_D has the following landscape properties. 1. A deep-non-optimal-order critical point (A_1, ..., A_ℓ) has a perturbation (A_1, ..., A_{k+1}′, ..., A_ℓ′) with rank(A_ℓ′) = rank(A_ℓ), which achieves a lower function value. 2. A deep-optimal-order critical point (A_1, ..., A_ℓ) has a perturbation (A_1, ..., A_{ℓ−1}′, A_ℓ′) with rank(A_ℓ′) = rank(A_ℓ) + 1, which achieves a lower function value. 3. Any point in X_D := {(A_1, ..., A_ℓ) : A_{(ℓ,1)} X ≠ 0} has a perturbation (A_1′, ..., A_ℓ′) that achieves a higher function value. Consequently, 1) every local minimum of L_D is also a global minimum for the above two types of critical points; and 2) every critical point of these two types in X_D is a saddle point. Theorem 4 implies that the landscape of L_D for deep linear networks is similar to that of L for shallow linear networks, i.e., the pattern of the parameters {p_i(k)}_{i=1}^{r(k)} implies different descent directions of the function value around the critical points. Our approach does not handle the remaining set of non-global minimizers, i.e., those for which there exists q ≤ ℓ−1 such that (A_ℓ, ..., A_q) is a global minimum point of L_D with A_{(q−1,1)} being fixed, and A_{(ℓ,q)} is of optimal order. It is unclear how to perturb the intermediate weight parameters using their analytical forms for deep networks, and we leave this as an open problem for future work. In this section, we study nonlinear neural networks with one hidden layer. In particular, we consider nonlinear networks with the ReLU activation function σ: R → R defined as σ(x) := max{x, 0}. Our study focuses on the set of differentiable critical points. The weight parameters between the layers are denoted by A_2 ∈ R^{d_2×d_1} and A_1 ∈ R^{d_1×d_0}, respectively, and the input and output data are denoted by X ∈ R^{d_0×m} and Y ∈ R^{d_2×m}, respectively. We are interested in the square loss function, which is given by L_N(A_1, A_2) := (1/2) ||A_2 σ(A_1 X) − Y||_F^2, where σ acts on A_1 X entrywise. Existing studies on nonlinear networks characterized sufficient conditions for critical points to be global minima BID9. Since the activation function σ is piecewise linear, the entire parameter space can be partitioned into disjoint cones. In particular, we consider the set of cones K_{I×J}, where I ⊂ {1, ..., d_1}, J ⊂ {1, ..., m}, defined by requiring (A_1 X)_{I,J} ≥ 0 and all other entries of A_1 X < 0, where "≥" and "<" represent entrywise comparisons. Within K_{I×J}, the term σ(A_1 X) activates only the entries (A_1 X)_{I,J}, and the corresponding loss function L_N is equivalent (up to the constant (1/2)||Y_{:,J^c}||_F^2 contributed by the inactive columns) to the loss of a shallow linear network with parameters ((A_2)_{:,I}, (A_1)_{I,:}) and input and output data pair (X_{:,J}, Y_{:,J}). Note that our results on shallow linear networks in Section 2 are applicable to all parameter dimensions and data matrices. Thus, Theorem 1 fully characterizes the forms of the critical points of L_N in K_{I×J}. Moreover, the existence of such critical points can be analytically examined by substituting their forms into the cone conditions.
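A quick numerical check of this reduction in the single-hidden-unit case (d_1 = 1, I = {1}, which is also the setting of Proposition 7 below) is sketched here (ours, for illustration): the ReLU loss equals the linear-network loss on the active columns J plus the constant contributed by the inactive columns.

```python
import numpy as np

rng = np.random.default_rng(2)
d0, d2, m = 4, 3, 10
X, Y = rng.standard_normal((d0, m)), rng.standard_normal((d2, m))
a1 = rng.standard_normal((1, d0))            # single hidden unit: A_1 is 1 x d0
a2 = rng.standard_normal((d2, 1))

relu = lambda z: np.maximum(z, 0.0)
L_N = 0.5 * np.linalg.norm(a2 @ relu(a1 @ X) - Y) ** 2

# Active columns J: those where the hidden unit's pre-activation is nonnegative.
J = (a1 @ X).ravel() >= 0
L_reduced = (0.5 * np.linalg.norm(a2 @ (a1 @ X[:, J]) - Y[:, J]) ** 2
             + 0.5 * np.linalg.norm(Y[:, ~J]) ** 2)

print(L_N, L_reduced)    # identical: within the cone, L_N is a shallow-linear-network loss
```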
In summary, we obtain the following result, where we denote Σ_J := Y_{:,J} X_{:,J}^† X_{:,J} Y_{:,J}^T with the full singular value decomposition U_J Λ_J U_J^T, and suppose that Σ_J has r(J) distinct positive singular values σ_1(J) > ··· > σ_{r(J)}(J) with multiplicities m_1(J), ..., m_{r(J)}(J), respectively, and m̄(J) zero singular values. Proposition 6 (Characterization of critical points). All critical points of L_N in K_{I×J} for any I ⊂ {1, ..., d_1}, J ⊂ {1, ..., m} are necessarily and sufficiently characterized by a matrix L_1 ∈ R^{|I|×d_0}, a block matrix V ∈ R^{d_2×|I|} and an invertible matrix C ∈ R^{|I|×|I|} such that DISPLAYFORM3 and DISPLAYFORM4 hold, where the blocks V_i ∈ R^{m_i(J)×p_i} and V̄ ∈ R^{m̄(J)×p̄} consist of orthonormal columns with p_i ≤ m_i(J) for i = 1, ..., r(J) and p̄ ≤ m̄(J) such that DISPLAYFORM5. Moreover, a critical point in K_{I×J} exists if and only if there exist such C, V, L_1 that satisfy DISPLAYFORM6 and all other entries of A_1 X are negative. To further illustrate, we consider the special case where the nonlinear network has one unit in the hidden layer, i.e., d_1 = 1, in which case A_1 and A_2 are row and column vectors, respectively. Then, the entire parameter space can be partitioned into disjoint cones taking the form of K_{I×J}, and I = {1} is the only nontrivial choice. We obtain the following result from Proposition 6. Proposition 7 (Characterization of critical points). Consider L_N with d_1 = 1 and any J ⊂ {1, ..., m}. Then, any nonzero critical point of L_N within K_{{1}×J} can be necessarily and sufficiently characterized by an ℓ_1 ∈ R^{1×d_0}, a block unit vector v ∈ R^{d_2×1} and a scalar c ∈ R such that DISPLAYFORM7. Specifically, v is a unit vector supported on the entries corresponding to the same singular value of Σ_J. Moreover, a nonzero critical point in K_{{1}×J} exists if and only if there exist such c, v, ℓ_1 that satisfy DISPLAYFORM8 and DISPLAYFORM9. We note that Proposition 7 characterizes both the existence and the forms of the critical points of L_N over the entire parameter space for nonlinear networks with a single hidden unit. One of the existence conditions is automatically guaranteed because P_{ker(v)} = 0 for v ≠ 0. To further understand Proposition 7, suppose that there exists a critical point in K_{{1}×J} with v supported on the entries that correspond to the i-th singular value of Σ_J. Then, Proposition 1 implies that DISPLAYFORM10. In particular, the critical point achieves the local minimum DISPLAYFORM11. This is because in this case the critical point is full rank with optimal order, and hence corresponds to the global minimum of the reduced linear network within the cone. Since the singular values of Σ_J may vary with the choice of J, L_N may achieve different local minima in different cones. Thus, local minima that are not global minima can exist for L_N. The following proposition concludes this fact by considering a concrete example. Proposition 8. For one-hidden-layer nonlinear neural networks with ReLU activation function, there exist local minima that are not global minima, and there also exist local maxima. In the concrete example, the conditions FORMULA13 and FORMULA19 hold if c^{−1}(v)_{1,:} ≥ 0 and ℓ_{1,:} < 0. Similarly to the previous case, choosing c = 1, v as specified, and ℓ_1 = (−1, 0) yields a local minimum that achieves the function value L_N = 2. Hence, local minima that are not global minima do exist. Moreover, in the cone K_{I×J} with I = {1}, J = ∅, the function L_N remains the constant 5/2, and all points in this cone are local minima as well as local maxima. Thus, the landscape of the loss function of nonlinear networks is very different from that of the loss function of linear networks.
In this paper, we provide a full characterization of the analytical forms of the critical points for the square loss functions of three types of neural networks, namely, shallow linear networks, deep linear networks, and shallow ReLU nonlinear networks. We show that such analytical forms of the critical points have direct implications on the values of the corresponding loss functions, the achievement of the global minimum, and various landscape properties around these critical points. As a consequence, the loss function for linear networks has no spurious local minimum, while such points do exist for nonlinear networks with ReLU activation. In the future, it is interesting to further explore nonlinear neural networks. In particular, we wish to characterize the analytical forms of critical points for deep nonlinear networks over the full parameter space. Such results will further facilitate the understanding of the landscape properties around these critical points. Notations: For any matrix M, denote vec(M) as the column vector formed by stacking its columns. Denote the Kronecker product as "⊗". Then, the following useful relationships hold for any dimension-compatible matrices M, U, V, W: DISPLAYFORM0 DISPLAYFORM1 DISPLAYFORM2 DISPLAYFORM3. Recall that a point (A_1, A_2) is a critical point of L when DISPLAYFORM4 and DISPLAYFORM5 hold. We first prove the forms of the critical points. DISPLAYFORM6 Next, we derive the form of A_2. Recall the full singular value decomposition Σ = UΛU^T, where Λ is a diagonal matrix with distinct singular values σ_1 > ... > σ_r > 0 and multiplicities m_1, ..., m_r, respectively. We also assume that there are m̄ zero singular values in Λ. Using the fact that P_{col(A_2)} = U P_{col(U^T A_2)} U^T, the last equality reduces to DISPLAYFORM7. By the multiplicity pattern of the singular values in Λ, P_{col(U^T A_2)} must be block diagonal. Specifically, we can write P_{col(U^T A_2)} = diag(P_1, ..., P_r, P̄), where P_i ∈ R^{m_i×m_i} and P̄ ∈ R^{m̄×m̄}. Also, since P_{col(U^T A_2)} is a projection, P_1, ..., P_r, P̄ must all be projections. Note that P_{col(U^T A_2)} has rank rank(A_2), and suppose that P_1, ..., P_r, P̄ have ranks p_1, ..., p_r, p̄, respectively. Then, we must have p_i ≤ m_i for i = 1, ..., r, p̄ ≤ m̄, and p_1 + ··· + p_r + p̄ = rank(A_2). Also, note that each projection can be expressed as P_i = V_i V_i^T with V_i ∈ R^{m_i×p_i} and V̄ ∈ R^{m̄×p̄} consisting of orthonormal columns. Hence, we can write P_{col(U^T A_2)} = V V^T where V = diag(V_1, ..., V_r, V̄). We then conclude that P_{col(A_2)} = U P_{col(U^T A_2)} U^T = U V V^T U^T. Thus, A_2 has the same column space as UV, and there must exist an invertible matrix C such that A_2 = UVC. Then, plugging A_2^† = C^{−1} V^T U^T into the corresponding equation yields the desired form of A_1. We now prove the remaining condition. Note that the above proof is based on the equations DISPLAYFORM9; hence, the forms of A_1, A_2 need to further satisfy ∇_{A_2} L = 0. By the form of A_2, we obtain DISPLAYFORM10. This expression, together with the form of A_1, implies that DISPLAYFORM11, where (i) uses the fact that XX^†X = X, and (ii) uses the fact that the block pattern of V is compatible with the multiplicity pattern of the singular values in Λ, and hence V V^T Λ V = Λ V. On the other hand, we also obtain DISPLAYFORM12. Thus, to satisfy ∇_{A_2} L = 0, we require that DISPLAYFORM13, which is equivalent to DISPLAYFORM14. Lastly, note that (I − UV(UV)^T) = P_{col(UV)^⊥} and (I − VV^T) = P_{ker(V^T)}, which concludes the proof. By expansion we obtain that L = DISPLAYFORM0. Consider any (A_1, A_2) that satisfies the first critical point equation.
For such a point, we have shown that it also satisfies the other critical point equation, which further yields that DISPLAYFORM1, where (i) follows from the fact that Tr(P_{col(A_2)} Σ P_{col(A_2)}) = Tr(P_{col(A_2)} Σ), and (ii) uses the fact that P_{col(A_2)} = U P_{col(U^T A_2)} U^T. In particular, a critical point (A_1, A_2) satisfies this equation. Moreover, using the form of the critical point A_2 = UVC, the expression further becomes DISPLAYFORM2, where (i) is due to P_{col(VC)} = P_{col(V)} = VV^T, and (ii) utilizes the block pattern of V and the multiplicity pattern of Λ that are specified in Theorem 1. Consider a critical point (A_1, A_2) with the forms given by Theorem 1. By choosing L_1 = 0, the consistency condition is guaranteed. Then, we can specify a critical point with any V that satisfies the block pattern specified in Theorem 1, i.e., we can choose any p_i, i = 1, ..., r, p̄ such that p_i ≤ m_i for i = 1, ..., r and p̄ ≤ m̄. If min{d_2, d_1} ≤ Σ_{i=1}^r m_i, the global minimum value is achieved by a full-rank A_2 with rank(A_2) = min{d_2, d_1} and DISPLAYFORM1; that is, the singular values are selected in decreasing order to minimize the function value. If (A_2, A_1) is a global minimizer and min{d_y, d} > Σ_{i=1}^r m_i, the global minimum can be achieved by choosing p_i = m_i for all i = 1, ..., r and p̄ ≥ 0. In particular, we do not need a full-rank A_2 to achieve the global minimum. For example, we can choose rank(A_2) = Σ_{i=1}^r m_i < min{d_y, d} with p_i = m_i for all i = 1, ..., r and p̄ = 0. We first prove item 1. Consider a non-optimal-order critical point (A_1, A_2). By Theorem 1, we can write A_2 = UVC where V = [diag(V_1, ..., V_r, V̄), 0] and V_i, i = 1, ..., r, V̄ consist of orthonormal columns. Define the orthonormal block diagonal matrix S accordingly. Since (A_1, A_2) is a non-optimal-order critical point, there exist 1 ≤ i < j ≤ r such that p_i < m_i and p_j > 0. Then, consider the following perturbation M of US for some ε > 0: DISPLAYFORM0 DISPLAYFORM1, with which we further define the perturbation matrix A_2′ = M S^T V C. Also, let the perturbation matrix A_1′ be generated by the form in Theorem 1 with U ← M and V ← S^T V. Note that with this construction, (A_1′, A_2′) satisfies the consistency condition, which further implies that A_2′ A_1′ X = P_{col(A_2′)} Y X^† X. Thus, the loss value at the point (A_1′, A_2′) can be evaluated as before, and we obtain that DISPLAYFORM2, where the last equality uses the fact that S^T Λ S = Λ, as can be observed from the block pattern of S and the multiplicity pattern of Λ. Also, by the construction of M and the form of S^T V, a careful calculation shows that only the i-th and j-th diagonal elements of the corresponding projection matrix have changed, i.e., DISPLAYFORM3. As the indices i, j correspond to the singular values σ_i, σ_j, respectively, and σ_i > σ_j, one obtains that DISPLAYFORM4. Thus, the constructed point (A_1′, A_2′) achieves a lower function value for any ε > 0. Letting ε → 0 and noticing that M is a perturbation of US, the point (A_1′, A_2′) can be in an arbitrary neighborhood of (A_1, A_2). Lastly, note that rank(A_2′) = rank(A_2). This completes the proof of item 1. Next, we prove item 2. Consider an optimal-order critical point (A_1, A_2). Then, A_2 must be rank deficient, since otherwise a full-rank A_2 with optimal order would correspond to a global minimizer by Proposition 2. Since A_2 is of optimal order, there exists some k ≤ r such that the block pattern of V takes the corresponding truncated form. Using this expression, we obtain that DISPLAYFORM5 DISPLAYFORM6. We now specify our perturbation scheme, recalling the orthonormal matrix S defined above.
Then, we consider the following matrices for some ε_1, ε_2 > 0: DISPLAYFORM7. For this purpose, we need to utilize the condition of critical points, which can be equivalently expressed as DISPLAYFORM8 (ii) ⇔ (CL_1)_{(rank(A_2)+1):d_1,:} X Y^T (I − U S_{:,1:(q−1)} (U S_{:,1:(q−1)})^T) = 0, where (i) follows by taking the transpose and then simplifying, and (ii) uses the fact that V = S S^T V = S_{:,1:(q−1)} in the case of an optimal-order critical point. Calculating the function value at the perturbed point, we obtain that DISPLAYFORM9. We next simplify the above three trace terms. For the first trace term, observe that DISPLAYFORM10 = ε_2^2 Tr(S_{:,q}^T Λ S_{:,q}), where (i) follows because S_{:,q} is orthogonal to the columns of S_{:,1:(q−1)}. For the second trace term, we obtain that DISPLAYFORM11 = 2 Tr(ε_2 U S_{:,q} (CL_1)_{(rank(A_2)+1),:} X Y^T U V_diag (U V_diag)^T) + 2 Tr(ε_1 ε_2 U S_{:,q} S_{:,q}^T Λ S S^T V_diag (U V_diag)^T) (i) = 2 Tr(ε_2 U S_{:,q} (CL_1)_{(rank(A_2)+1),:} X Y^T U V_diag (U V_diag)^T) + 2 Tr(ε_1 ε_2 σ_k U S_{:,q} e_q^T S^T V_diag (U V_diag)^T) (ii) = 2 Tr(ε_2 U S_{:,q} (CL_1)_{(rank(A_2)+1),:} X Y^T U V_diag (U V_diag)^T), where (i) follows from S_{:,q}^T Λ S = σ_k e_q^T, and (ii) follows from e_q^T S^T V_diag = 0. For the third trace term, we obtain that 2 Tr(P Y) = 2 Tr(ε_2 U S_{:,q} (CL_1)_{(rank(A_2)+1),:} X Y^T) + 2 Tr(ε_1 ε_2 U S_{:,q} (U S_{:,q})^T Σ) = 2 Tr(ε_2 U S_{:,q} (CL_1)_{(rank(A_2)+1),:} X Y^T) + 2 Tr(ε_1 ε_2 S_{:,q}^T Λ S_{:,q}). Combining the expressions for the three trace terms above, we conclude the proof of item 2. Consider a critical point (A_1, ..., A_ℓ) of the deep linear network. Observe that the product matrix A_{(ℓ,2)} is equivalent to the class of matrices B_2 ∈ R^{min{d_ℓ,...,d_2}×d_1}. Consider a critical point (B_2, A_1) of the corresponding shallow linear network; the proof is then similar to that for shallow linear networks. Consider a deep-non-optimal-order critical point (A_1, ..., A_ℓ), and define the orthonormal block matrix S_k using the blocks of V_k in a similar way as before. Then, A_{(ℓ,k+2)} takes the form A_{(ℓ,k+2)} = U_k S_k S_k^T V_k C_k. Since A_{(ℓ,k+2)} is of non-optimal order, there exist i < j ≤ r(k) such that p_i(k) < m_i(k) and p_j(k) > 0. Thus, we perturb the j-th column of U_k S_k as in the shallow case, and denote the resulting matrix as M_k. Then, we perturb A_ℓ to be A_ℓ′ = M_k (U_k S_k)^T A_ℓ, so that A_ℓ′ A_{(ℓ−1,k+2)} = M_k S_k^T V_k C_k. Moreover, we generate A_{k+1}′ by the corresponding equation in Theorem 3 with U_k ← M_k, V_k ← S_k^T V_k. Note that such a construction satisfies the consistency condition, and hence also the critical point equations, which further yields that DISPLAYFORM0. With the above equation, the function value at this perturbed point is evaluated as DISPLAYFORM1. Then, a careful calculation shows that only the i-th and j-th diagonal elements of DISPLAYFORM2 have changed. Now consider a deep-optimal-order critical point (A_1, ..., A_ℓ). Note that with A_{(ℓ−2,1)} fixed to be a constant, the deep linear network reduces to a shallow linear network with parameters (A_ℓ, A_{ℓ−1}). Since (A_ℓ, A_{ℓ−1}) is a non-global-minimum critical point of this shallow linear network and A_ℓ is of optimal order, we can apply the perturbation scheme in the proof of Proposition 3 to identify a perturbation (A_ℓ′, A_{ℓ−1}′) with rank(A_ℓ′) = rank(A_ℓ) + 1 that achieves a lower function value. Consider any point in X_D. Since A_{(ℓ,1)} X ≠ 0, we can scale the nonzero row, say, the i-th row (A_ℓ)_{i,:} A_{(ℓ−1,1)} X, properly in the same way as in the proof of Proposition 3 to increase the function value. Lastly, items 1 and 2 imply that every local minimum is a global minimum for these two types of critical points. Moreover, combining items 1, 2 and 3, we conclude that every critical point of these two types in X_D is a saddle point.
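The vectorization and Kronecker-product identities listed at the start of this supplement can be sanity-checked numerically; the following sketch (ours) verifies one standard instance, vec(UMV) = (V^T ⊗ U) vec(M), under the column-stacking convention:

```python
import numpy as np

rng = np.random.default_rng(3)
U, M, V = rng.standard_normal((4, 3)), rng.standard_normal((3, 5)), rng.standard_normal((5, 2))

vec = lambda A: A.flatten(order="F")      # column-stacking vectorization
lhs = vec(U @ M @ V)
rhs = np.kron(V.T, U) @ vec(M)
print(np.allclose(lhs, rhs))              # True: vec(U M V) = (V^T kron U) vec(M)
```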
We provide necessary and sufficient analytical forms for the critical points of the square loss functions for various neural networks, and exploit the analytical forms to characterize the landscape properties for the loss functions of these neural networks.
The backpropagation (BP) algorithm is often thought to be biologically implausible in the brain. One of the main reasons is that BP requires symmetric weight matrices in the feedforward and feedback pathways. To address this "weight transport problem", two biologically-plausible algorithms have been proposed that relax BP's weight symmetry requirements and demonstrate comparable learning capabilities to that of BP on small datasets. However, a recent study finds that although feedback alignment (FA) and some variants of target propagation (TP) perform well on MNIST and CIFAR, they perform significantly worse than BP on ImageNet. Here, we additionally evaluate the sign-symmetry (SS) algorithm, which differs from both BP and FA in that the feedback and feedforward weights do not share magnitudes but share signs. We examined the performance of sign-symmetry and feedback alignment on the ImageNet and MS COCO datasets using different network architectures (ResNet-18 and AlexNet for ImageNet; RetinaNet for MS COCO). Surprisingly, networks trained with sign-symmetry can attain classification performance approaching that of BP-trained networks. These results complement the earlier study and establish a new benchmark for future biologically-plausible learning algorithms on more difficult datasets and more complex architectures. Deep learning models today are highly successful in task performance, learning useful representations, and even matching representations in the brain BID26 BID24. However, it remains a contentious issue whether these models reflect how the brain learns. Core to the problem is the fact that backpropagation, the learning algorithm underlying most of today's deep networks, is difficult to implement in the brain given what we know about the brain's hardware (BID2; however, see Hinton 2007). One main reason why backpropagation seems implausible in the brain is that it requires sharing of feedforward and feedback weights. Since synapses are unidirectional in the brain, feedforward and feedback connections are physically distinct. Requiring them to share their weights, even as weights are adjusted during learning, seems highly implausible. One approach to addressing this issue is to relax the requirement for weight symmetry in error backpropagation. Surprisingly, when the feedback weights share only the sign but not the magnitude of the feedforward weights BID16, or even when the feedback weights are random (but fixed) BID17, they can still guide useful learning in the network, with performance comparable to and sometimes even better than that of backpropagation, on datasets such as MNIST and CIFAR. Here, we refer to these two algorithms, respectively, as "sign-symmetry" and "feedback alignment." Since weight symmetry in backpropagation is required for accurately propagating the derivative of the loss function through layers, the success of asymmetric feedback algorithms indicates that learning can be supported even by inaccurate estimation of the error derivative. In feedback alignment, the authors propose that the feedforward weights learn to align with the random feedback weights, thereby allowing feedback to provide approximate yet useful learning signals BID17. However, a recent paper by BID0 finds that feedback alignment and a few other biologically-plausible algorithms, including variants of target propagation, do not generalize to larger and more difficult problems such as ImageNet BID4 and perform much worse than backpropagation. Nevertheless, the specific conditions Bartunov et al.
tested are somewhat restrictive. First, they only tested locally-connected networks (i.e., weight sharing is not allowed among convolution filters at different spatial locations), a choice that is motivated by biological plausibility but in practice limits the size of the network (without weight sharing, each convolutional layer needs much more memory to store its weights), making it unclear whether poor performance was attributable solely to the algorithm or to the algorithm on those architectures. Second, Bartunov et al. did not test sign-symmetry, which may be more powerful than feedback alignment since sign-symmetric feedback weights may carry more information about the feedforward weights than the random feedback weights used in feedback alignment. In this work, we re-examine the performance of sign-symmetry and feedback alignment on the ImageNet and MS COCO datasets using standard ConvNet architectures (i.e., ResNet-18, AlexNet, and RetinaNet). We find that sign-symmetry can in fact train networks on both tasks, achieving similar performance to backpropagation on ImageNet and reasonable performance on MS COCO. In addition, we test the use of backpropagation exclusively in the last layer while otherwise using feedback alignment, hypothesizing that in the brain, the classifier layer may not be a fully-connected layer and may deliver the error signal through some other unspecified mechanism. Such partial feedback alignment can achieve better performance (relative to backpropagation) than reported in BID0. Taken together, these results extend previous findings and indicate that existing biologically-plausible learning algorithms remain viable options both for training artificial neural networks and for modeling how learning can occur in the brain. Consider a layer in a feedforward neural network. Let x_i denote the input to the i-th neuron in the layer and y_j the output of the j-th neuron. Let W denote the feedforward weight matrix and W_ij the connection between input x_i and output y_j. Let f denote the activation function. Then, Equation 1 describes the computation in the feedforward step. Now, let B denote the feedback weight matrix and B_ij the feedback connection between output y_j and input x_i, and let f′ denote the derivative of the activation function f. Given the objective function E, the error gradient ∂E/∂x_i calculated in the feedback step is described by Equation 2. Equation 1: y_j = f(Σ_i W_ij x_i). Equation 2: ∂E/∂x_i = Σ_j B_ij f′(Σ_{i′} W_{i′j} x_{i′}) ∂E/∂y_j. Standard backpropagation requires B = W. Sign-symmetry BID16 relaxes the above symmetry requirement by letting B = sign(W), where sign(·) is the (elementwise) sign function. Feedback alignment BID17 uses a fixed random matrix as the feedback weight matrix B. Lillicrap et al. showed that through training, W is adjusted such that on average, e^T W B e > 0, where e is the error in the network's output. This condition implies that the error correction signal Be lies within 90° of e^T W, the error calculated by standard backpropagation. We implement both algorithms in PyTorch for convolutional and fully-connected layers and post the code at https://github.com/willwx/sign-symmetry. We trained ResNet-18 BID6 on ImageNet using 5 different training settings: 1) backpropagation; 2) sign-symmetry for convolutional layers and backpropagation for the last, fully-connected layer; 3) sign-symmetry for all (convolutional and fully-connected) layers; 4) feedback alignment for convolutional layers and backpropagation for the fully-connected layer; and 5) feedback alignment for all (convolutional and fully-connected) layers.
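For concreteness, here is a minimal sketch of how such an asymmetric backward pass can be implemented for a bias-free fully-connected layer (our own illustration, not the released code linked above; the scale used for the sign-symmetric feedback stands in for the layer's initialization scale described later):

```python
import torch
import torch.nn as nn

class AsymmetricFeedbackLinear(torch.autograd.Function):
    """y = x W^T in the forward pass; the backward pass propagates error through B instead of W."""
    @staticmethod
    def forward(ctx, x, weight, feedback):
        ctx.save_for_backward(x, weight, feedback)
        return x.mm(weight.t())

    @staticmethod
    def backward(ctx, grad_out):
        x, weight, feedback = ctx.saved_tensors
        grad_x = grad_out.mm(feedback)      # error propagated through B rather than W (cf. Equation 2)
        grad_w = grad_out.t().mm(x)         # weight update still uses the true forward activations
        return grad_x, grad_w, None         # the feedback matrix itself receives no gradient

class SignSymmetricLinear(nn.Module):
    def __init__(self, in_features, out_features, scale=0.1):
        super().__init__()
        self.weight = nn.Parameter(scale * torch.randn(out_features, in_features))
        self.scale = scale                  # assumed stand-in for the layer's initialization scale
    def forward(self, x):
        feedback = self.scale * self.weight.detach().sign()   # B = sign(W), recomputed each step
        return AsymmetricFeedbackLinear.apply(x, self.weight, feedback)

# Feedback alignment would instead pass a fixed random `feedback` drawn once at initialization.
layer = SignSymmetricLinear(784, 10)
out = layer(torch.randn(32, 784))
out.sum().backward()                        # gradients flow through sign(W) rather than W
```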
In sign-symmetry, at each backward step, feedback weights were taken as the signs of the feedforward weights, scaled by the same scale λ used to initialize that layer. In feedback alignment, feedback weights were initialized once at the beginning as random variables from the same distribution used to initialize that layer. For backpropagation, standard training parameters were used (SGD with learning rate 0.1, momentum 0.9, and weight decay 10^-4). For ResNet-18 with the other learning algorithms, we used SGD with learning rate 0.05, while momentum and weight decay remained unchanged. For AlexNet with all learning algorithms, standard training parameters were used (SGD with learning rate 0.01, momentum 0.9, and weight decay 5 × 10^-4). We used a version of AlexNet (BID13, as used in torchvision), which we slightly modified to add batch normalization BID9 before every nonlinearity and consequently removed dropout. For all experiments, we used a batch size of 256, a learning rate decay of 10-fold every 10 epochs, and trained for 50 epochs on ImageNet BID12. Sign-symmetry performed nearly as well as backpropagation, while feedback alignment performed better than previously reported when backpropagation was used to train the last layer. In all cases, the network was able to learn (FIG0, TAB0). Remarkably, sign-symmetry only slightly underperformed backpropagation on this large benchmark dataset, despite the fact that sign-symmetry does not accurately propagate either the magnitude or the sign of the error gradient. Hence, this result is not predicted by the performance of signSGD BID1, where weight updates use the sign of the gradients but the gradients are still calculated accurately, or of XNOR-Net BID22, where both feedforward and feedback weights are binary but symmetric, so error backpropagation is still accurate. An intuitive explanation for this performance is that the skip connections in ResNet help prevent the degradation of the gradient being passed through many layers of sign-symmetric feedback. However, sign-symmetry also performed similarly well to backpropagation in a (modified) AlexNet architecture, which did not contain skip connections. Therefore, skip connections alone do not explain the performance of sign-symmetry. In addition, although its performance was considerably worse, feedback alignment was still able to guide better learning in the network than reported by BID0 (Figure 3) if we use backpropagation in the last layer. This condition is not unreasonable since, in the brain, the classifier layer is likely not a soft-max classifier and may deliver error signals by a different mechanism. We also tested using backpropagation exclusively for the last layer in a network otherwise trained with sign-symmetry, but the effect on the performance was minimal. One possible reason why sign-symmetry performed better than feedback alignment is that in sign-symmetry, the feedback weight always tracks the sign of the feedforward weight, which may reduce the burden on the feedforward weight to learn to align with the feedback weight. Finally, in BID16, Batch-Manhattan (BM) SGD was proposed as a way to stabilize training with asymmetric feedback algorithms. In our experience, standard SGD consistently worked better than BM for sign-symmetry, but BM may improve results for feedback alignment. We have not comprehensively characterized the effects of BM since many factors like learning rate can affect the outcome. Future experiments are needed to draw stronger conclusions.
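For reference, the backpropagation baseline schedule described above corresponds to a standard PyTorch setup along these lines (a sketch of the hyperparameters only, assuming a stock torchvision ResNet-18 rather than the asymmetric-feedback variants):

```python
import torch
from torchvision.models import resnet18

model = resnet18(num_classes=1000)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
# 10-fold learning-rate decay every 10 epochs, 50 epochs total, batch size 256.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
for epoch in range(50):
    # ... one pass over the ImageNet training loader would go here ...
    scheduler.step()
```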
Besides the ImageNet classification task, we examined the performance of sign-symmetry on the MS COCO object detection task. Object detection is more complex than classification and might therefore require more complicated network architectures in order to achieve high accuracy. Thus, in this experiment we assessed the effectiveness of sign-symmetry in training networks that were more complicated and difficult to optimize. We trained the state-of-the-art object detection network RetinaNet proposed by BID18 on the COCO trainval35k split, which consists of 80k images from train and 35k random images from the 40k-image val set. RetinaNet comprises a ResNet-FPN backbone, a classification subnet, and a bounding-box regression subnet. The network was trained with three different training settings: 1) backpropagation for all layers; 2) backpropagation for the last layer in both subnets and sign-symmetry for the rest of the layers; 3) backpropagation for the last layer in both subnets and feedback alignment for the rest of the layers. We used a backbone ResNet-18 pretrained on ImageNet to initialize the network. In all the experiments, the network was trained with SGD with an initial learning rate of 0.01, momentum of 0.9, and weight decay of 0.0001. We trained the network for 40k iterations with 8 images in each minibatch. The learning rate was divided by 10 at iteration 20k. The results on COCO are similar to those on ImageNet, although the performance gap between SS and BP on COCO is slightly more prominent (FIG1). A number of factors could have potentially contributed to this result. We followed the Feature Pyramid Network (FPN) architecture design choices, optimizers, and hyperparameters reported by BID18; these choices are all optimized for use with backpropagation instead of sign-symmetry. Hence, the results here represent a lower bound on the performance of sign-symmetry for training networks on the COCO dataset. We ran a number of analyses to understand how sign-symmetry guides learning. BID17 show that with feedback alignment, the alignment angles between feedforward and feedback weights gradually decrease because the feedforward weights learn to align with the feedback weights. We asked whether the same happens in sign-symmetry by computing alignment angles as in BID17: for every pair of feedforward and feedback weight matrices, we flattened the matrices into vectors and computed the angle between the vectors. Interestingly, we found that during training, the alignment angles decreased for the last 3 layers but increased for the other layers (Figure 3a). In comparison, in the backpropagation-trained network (where sign(W) was not used in any way), the analogous alignment angle between W and sign(W) increased for all layers. One possible explanation for the increasing trend is that as the training progresses, the feedforward weights tend to become sparse. Geometrically, this means that feedforward vectors become more aligned to the standard basis vectors and less aligned with the feedback weight vectors, which always lie on a diagonal by construction. This explanation is consistent with the similarly increasing trend of the average kurtosis of the feedforward weights (Figure 3b), which indicates that values of the weights became more dispersed during training. Since the magnitudes of the feedforward weights were discarded when calculating the error gradients, we also looked at how sign-symmetry affected the size of the trained weights. Sign-symmetry and backpropagation resulted in weights with similar magnitudes (Figure 3c).
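The alignment-angle measurement described above can be computed directly from the flattened matrices; a small sketch (our own helper function, with hypothetical names):

```python
import torch

def alignment_angle_deg(W: torch.Tensor, B: torch.Tensor) -> float:
    """Angle (degrees) between a flattened feedforward weight matrix and its feedback matrix."""
    w, b = W.flatten(), B.flatten()
    cos = torch.dot(w, b) / (w.norm() * b.norm())
    return torch.rad2deg(torch.acos(cos.clamp(-1.0, 1.0))).item()

W = torch.randn(64, 128)
print(alignment_angle_deg(W, W.sign()))   # angle between W and sign(W), as tracked during training
```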
More work is needed to elucidate how sign-symmetry guides efficient learning in the network. Our indicate that biologically-plausible learning algorithms, specifically sign-symmetry and feedback alignment, are able to learn on ImageNet. This finding seemingly conflicts with the findings by BID0. Why do we come to such different ?First, Bartunov et al. did not test sign-symmetry, which is expected to be more powerful than feedback alignment, because it is a special case of feedback alignment that allows feedback weights to have additional information about feedforward weights. Indeed, on ImageNet, the performance of sign-symmetry approached that of backpropagation and exceeded the performance of feedback alignment by a wide margin. Another reason may be that instead of using standard ConvNets on ImageNet, Bartunov et al. only tested locally-connected networks. While the later is a more biologically plausible architecture, in practice, it is limited in size by the need to store separate weights Figure 3: a, During training with sign-symmetry, alignment angles between feedforward weights W and feedback weights sign(W) decreased in the last 3 layers but increased in early layers, whereas during training with backpropagation, the analogous alignment angles increased for all layers and were overall larger. b, Kurtosis of the feedforward weight matrices increased during training. c, The magnitudes of weights trained by sign-symmetry were similar to those trained by backpropagation. Line and shading, mean ± std for epoch 50.for each spatial location. This reduced model capacity creates a bottleneck that may affect the performance of feedback alignment (see , Supplementary Note 9). Finally, the performance of feedback alignment also benefited from the use of backpropagation in the last layer in our conditions. A major reason why backpropagation is considered implausible in the brain is that it requires exact symmetry of physically distinct feedforward and feedback pathways. Sign-symmetry and feedback alignment address this problem by relaxing this tight coupling of weights between separate pathways. Feedback alignment requires no relation at all between feedforward and feedback weights and simply depends on learning to align the two. Hence, it can be easily realized in the brain (for example, see , Supplementary Figure 3). However, empirically, we and others have found its performance to be not ideal on relatively challenging problems. Sign-symmetry, on the other hand, introduces a mild constraint that feedforward and feedback connections be "antiparallel": They need to have opposite directions but consistent signs. This can be achieved in the brain with two additional yet plausible conditions: First, the feedforward and feedback pathways must be specifically wired in this antiparallel way. This can be achieved by using chemical signals to guide specific targeting of axons, similar to how known mechanisms for specific wiring operate in the brain BID20 BID8. One example scheme of how this can be achieved is shown in Figure 4. While the picture in Figure 4a is complex, most of the complexity comes from the fact that units in a ConvNet produce inconsistent outputs (i.e., both positive and negative). If the units are consistent (i.e., producing exclusively positive or negative outputs), the picture simplifies to Figure 4b. Neurons in the brain are observed to be consistent, as stated by the so-called "Dale's Law" BID3 BID25. 
Hence, this constraint would have to be incorporated at some point in any biologically plausible network, and remains an important direction for future work. We want to remark that Figure 4 is meant to indicate the relative ease of wiring sign-symmetry in the brain (compared to, e.g., wiring a network capable of weight transport), not that the brain is known to be wired this way. Nevertheless, it represents a hypothesis that is falsifiable by experimental data, potentially in the near future. Related, a second desideratum is that weights should not change sign during training. While our current setting for sign-symmetry removes weight magnitude transport, it still implicitly relies on "sign transport." However, in the brain, the sign of a connection weight depends on the type of the 4 A paper from last year examined connectivity patterns within tissue sizes of approx. 500 microns and axon lengths of approx. 250 microns BID23; recent progress (fueled by deep learning) can trace axons longer than 1 mm, although the imaging of large brain volumes is still limiting. In comparison, in mice, adjacent visual areas (corresponding to stages of visual processing) are 0.5-several mms apart BID19, while in primates it is tens of millimeters. Thus, testing the reality of sign-symmetric wiring is not quite possible today but potentially soon to be. presynaptic neuron-e.g., glutamatergic (excitatory) or GABAergic (inhibitory)-a quality that is intrinsic to and stable for each neuron given existing evidence. Hence, if sign-symmetry is satisfied initially-for example, through specific wiring as just described-it will be satisfied throughout learning, and "sign transport" will not be required. Thus, evaluating the capacity of sign-fixed networks to learn is another direction for future work. Figure 4: The specific wiring required for sign-symmetric feedback can be achieved using axonal guidance by specific receptor-ligand recognition. Assume that an axon carrying ligand L X will only synapse onto a downstream neuron carrying the corresponding receptor R X. By expressing receptors and ligands in an appropriate pattern, an antiparallel wiring pattern can be established that supports sign-symmetric feedback. a, An example scheme. In this scheme, one inconsistent unit (i.e., a unit that produce both positive and negative outputs) in the network is implemented by three consistent biological neurons, so that each synapse is exclusively positive or negative. n input neurons orthogonal ligand-receptor pairs is sufficient to implement all possible connection patterns. b, An example scheme for implementing a signsymmetric network with consistent units. Only 2 orthogonal ligand-receptor pairs are needed to implement all possible connectivities in this case. These schemes represent falsifiable hypotheses, although they do not exclude other possible implementations. Another element of unclear biological reality, common to feedback alignment and sign-symmetry, is that the update of a synaptic connection (i.e., weight) between two feedforward neurons (A to B) depends on the activity in a third, feedback neuron C, whose activation represents the error of neuron B. One way it can be implemented biologically is for neuron C to connect to B with a constant and fixed weight. 
When C changes its value due to error feedback, it will directly induce a change of B's electric potential and thus of the postsynaptic potential of the synapse between A and B, which might lead to either Long-term Potentiation (LTP) or Long-term Depression (LTD) of synapse A-B.Biological plausibility of ResNet has been previously discussed by BID14, claiming that ResNet corresponds to an unrolled recurrent network in the visual cortex. However, it is unclear yet how backpropagation through time can be implemented in the brain. Biological plausibility of batch normalization has been discussed in BID15, where they addressed the issues with online learning (i.e., one sample at a time, instead of minibatch), recurrent architecture and consistent training and testing normalization statistics. Other biological constraints include removing weight-sharing in convolutional layers as in BID0, incorporating temporal dynamics as in BID17, using realistic spiking neurons, addressing the sample inefficiency general to deep learning, etc. We believe that these are important yet independent issues to the problem of weight transport and that by removing the latter, we have taken a meaningful step toward biological plausibility. Nevertheless, many steps remain in the quest for a truly plausible, effective, and empirically-verified model of learning in the brain. Recent work shows that biologically-plausible learning algorithms do not scale to challenging problems such as ImageNet. We evaluated sign-symmetry and re-evaluated feedback alignment on their effectiveness training ResNet and AlexNet on ImageNet and RetinaNet on MS COCO. We find that 1) sign-symmetry performed nearly as well as backpropagation on ImageNet, 2) slightly modified feedback alignment performed better than previously reported, and 3) both algorithms had reasonable performance on MS COCO with minimal hyperparameter tuning. Taken together, these indicate that biologically-plausible learning algorithms, in particular sign-symmetry, remain promising options for training artificial neural networks and modeling learning in the brain.
We introduce the 2-simplicial Transformer, an extension of the Transformer which includes a form of higher-dimensional attention generalising the dot-product attention, and uses this attention to update entity representations with tensor products of value vectors. We show that this architecture is a useful inductive bias for logical reasoning in the context of deep reinforcement learning. Deep learning contains many differentiable algorithms for computing with learned representations. These representations form vector spaces, sometimes equipped with additional structure. A recent example is the Transformer in which there is a vector space V of value vectors and an inner product space H of query and key vectors. This structure supports a kind of messagepassing, where a value vector v j ∈ V derived from entity j is propagated to update an entity i with weight q i · k j, where q i ∈ H is a query vector derived from entity i, k j ∈ H is a key vector derived from entity j, and the inner product on H is written as a dot product. The Transformer therefore represents a relational inductive bias, where a relation from entity j to entity i is perceived to the extent that q i · k j is large and positive. However, the real world has structure beyond entities and their direct relationships: for example, the three blocks in Figure 1 are arranged in such a way that if either of the supporting blocks is removed, the top block will fall. This is a simple 3-way relationship between entities i, j, k that is complex to represent as a system of 2-way relationships. It is natural to make the hypothesis that such higher-order relationships are essential to extracting the full predictive power of data, across many domains. In accordance with this hypothesis, we introduce a generalisation of the Transformer architecture, the 2-simplicial Transformer, which incorporates both 2-and 3-way interactions. Mathematically, the key observation is that higher-order interactions between entities can be understood using algebras. This is nothing but Boole's insight (Boole, 1847) which set in motion the development of modern logic. In our situation, an appropriate algebra is the Clifford algebra Cl(H) of the space H of queries and keys, which contains that space H ⊆ Cl(H) and in which queries and keys can be multiplied. To represent a 3-way interaction we map each entity i to a triple (p i, l k) using a natural continuous function η: Cl(H) −→ R associated to the Z-grading of Cl(H). This scalar measures how strongly the network perceives a 3-way interaction involving i, j, k. In summary, the 2-simplicial Transformer learns how to represent entities in its environment as vectors v ∈ V, and how to transform those entities to queries and (pairs of) keys in H, so that the signals provided by the scalars q i · k j and η(p i l 1 j l 2 k) are informative about higher-order structure in the environment. As a toy example of higher-order structure, we consider the reinforcement learning problem in a variant of the BoxWorld environment from . The original BoxWorld is played on a rectangular grid populated by keys and locked boxes of varying colours, with the goal being to open the box containing the "Gem". In our variant of the BoxWorld environment, bridge BoxWorld, the agent must use two keys simultaneously to obtain the Gem; this structure in the environment creates many 3-way relationships between entities, including for example the relationship between the locked boxes j, k providing the two keys and the Gem entity i. 
This structure in the environment is fundamentally logical in nature, and encodes a particular kind of conjunction; see Appendix I. The architecture of our deep reinforcement learning agent largely follows and the details are given in Section 4. The key difference between our simplicial agent and the relational agent of is that in place of a standard Transformer block we use a 2-simplicial Transformer block. Our experiments show that the simplicial agent confers an advantage over the relational agent as an inductive bias in our reasoning task. Motivation from neuroscience for a simplicial inductive bias for abstract reasoning is contained in Appendix J. Our use of tensor products of value vectors is inspired by the semantics of linear logic in vector spaces (; ; ;) in which an algorithm with multiple inputs computes on the tensor product of those inputs, but this is an old idea in natural language processing, used in models including the second-order RNN (; ; ;), multiplicative RNN , Neural Tensor Network and the factored 3-way Restricted Boltzmann Machine , see Appendix A. Tensors have been used to model predicates in a number of neural network architectures aimed at logical reasoning . The main novelty in our model lies in the introduction of the 2-simplicial attention, which allows these ideas to be incorporated into the Transformer architecture. In this section we first review the definition of the ordinary Transformer block and then explain the 2-simplicial Transformer block. We distinguish between the Transformer architecture which contains a word embedding layer, an encoder and a decoder , and the Transformer block which is the sub-model of the encoder that is repeated. The fundamental idea, of propagating information between nodes using weights that depend on the dot product of vectors associated to those nodes, comes ultimately from statistical mechanics via the Hopfield network (Appendix B). The ordinary and 2-simplicial Transformer blocks define operators on sequences e 1,..., e N of entity representations. Strictly speaking the entities are indices 1 ≤ i ≤ N but we sometimes identify the entity i with its representation e i. The space of entity representations is denoted V, while the space of query, key and value vectors is denoted H. We use only the vector space structure on V, but H = R d is an inner product space with the usual dot product pairing (h, h) → h · h and in defining the 2-simplicial Transformer block we will use additional algebraic structure on H, including the "multiplication" tensor B: H ⊗ H −→ H of (used to propagate tensor products of value vectors) and the Clifford algebra of H (used to define the 2-simplicial attention). In the first step of the standard Transformer block we generate from each entity e i a tuple of vectors via a learned linear transformation E: V −→ H ⊕3. These vectors are referred to respectively as query, key and value vectors and we write Stated differently, In the second step we compute a refined value vector for each entity Finally, the new entity representation e i is computed by the application of a feedforward network g θ, layer normalisation and a skip connection Remark 2.1. In the introduction we referred to the idea that a Transformer model learns representations of relations. To be more precise, these representations are heads, each of which determines an independent set of transformations W Q, W K, W V which extract queries, keys and values from entities. 
Thus a head determines not only which entities are related (via W Q, W K) but also what information to transmit between them (via W V). In multiple-head attention with K heads, there are K channels along which to propagate information between every pair of entities, each of dimension dim(H)/K. More precisely, we choose a decomposition H = H 1 ⊕ · · · ⊕ H K so that and write To compute the output of the attention, we take a direct sum of the value vectors propagated along every one of these K channels, as in the formula In combinatorial topology the canonical one-dimensional object is the 1-simplex (or edge) j −→ i. Since the standard Transformer model learns representations of relations, we refer to this form of attention as 1-simplicial attention. The canonical two-dimensional object is the 2-simplex (or triangle) which we may represent diagrammatically in terms of indices i, j, k as In the 2-simplicial Transformer block, in addition to the 1-simplicial contribution, each entity e i is updated as a function of pairs of entities e j, e k using the tensor product of value vectors u j ⊗ u k and a probability distribution derived from a scalar triple product p i, l 1 j, l 2 k in place of the scalar product q i · k j. This means that we associate to each entity e i a four-tuple of vectors via a learned linear transformation E: V −→ H ⊕4, denoted We still refer to p i as the query, l 1 i, l 2 i as the keys and u i as the value. Stated differently, whose square is a polynomial in the pairwise dot products This scalar triple product has a simple geometric interpretation in terms of the volume of the tetrahedron with vertices 0, a, b, c. To explain, recall that the triangle spanned by two unit vectors a, b in R 2 has an area A which can be written in terms of the dot product of a and b. In three dimensions, the analogous formula involves the volume V of the tetrahedron with vertices given by unit vectors a, b, c, and the scalar triple product as shown in Figure 2. In general, given nonzero vectors a, b, c letâ,b,ĉ denote unit vectors in the same directions. Then we can by Lemma C.10(v) factor out the length in the scalar triple product Figure 2: The geometry of 1-and 2-simplicial attention. Left: the dot product in terms of the area A in R 2. Right: the triple product in terms of the volume V in R 3. so that a general scalar triple product can be understood in terms of the vector norms and configurations of three points on the 2-sphere. One standard approach to calculating volumes of such tetrahedrons is the cross product which is only defined in three dimensions. Since the space of representations H is high dimensional the natural framework for the triple scalar product a, b, c is instead the Clifford algebra of H (see Appendix C). For present purposes, we need to know that a, b, c attains its minimum value (which is zero) when a, b, c are pairwise orthogonal, and attains its maximum value (which is a b c) if and only if {a, b, c} is linearly dependent (Lemma C.10). Using the number p i, l k as a measure of the degree to which entity i is attending to (j, k), or put differently, the degree to which the network predicts the existence of a 2-simplex (i, j, k), the update rule for the entities when using purely 2-simplicial attention is where B: H ⊗ H −→ H is a learned linear transformation. 
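To make the two attention types concrete, the following sketch uses our own notation: the arrays Q, K, P, L1, L2, U and the matrix B are placeholders for the learned quantities above, the joint softmax over pairs (j, k) is our reading of "a probability distribution derived from a scalar triple product", and the 1/√d scaling of the ordinary logits is the usual convention rather than something fixed by the text. It computes the 1-simplicial logits q_i · k_j and the 2-simplicial logits ⟨p_i, l¹_j, l²_k⟩, expanding the triple product in terms of pairwise dot products (equivalent to the formula of Example C.9).

    import numpy as np

    def softmax(x):
        x = x - x.max(axis=-1, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=-1, keepdims=True)

    def triple_product(a, b, c):
        # <a,b,c>^2 = (a.b)^2|c|^2 + (b.c)^2|a|^2 + (a.c)^2|b|^2 - 2(a.b)(a.c)(b.c):
        # zero when a, b, c are pairwise orthogonal, and maximal (= |a||b||c|)
        # exactly when {a, b, c} is linearly dependent.
        ab, ac, bc = a @ b, a @ c, b @ c
        sq = ab**2 * (c @ c) + bc**2 * (a @ a) + ac**2 * (b @ b) - 2.0 * ab * ac * bc
        return np.sqrt(max(sq, 0.0))

    def one_simplicial_logits(Q, K):
        # Ordinary attention: logit[i, j] = q_i . k_j (scaled by 1/sqrt(d)).
        return Q @ K.T / np.sqrt(Q.shape[-1])

    def two_simplicial_logits(P, L1, L2):
        # 2-simplicial attention: logit[i, j, k] = <p_i, l1_j, l2_k>.
        N = P.shape[0]
        logits = np.zeros((N, N, N))
        for i in range(N):
            for j in range(N):
                for k in range(N):
                    logits[i, j, k] = triple_product(P[i], L1[j], L2[k])
        return logits

    def two_simplicial_update(P, L1, L2, U, B):
        # B is the learned map H (x) H -> H stored as a [d, d*d] matrix, so that
        # B(u_j (x) u_k) = B @ kron(u_j, u_k); each entity i receives the
        # attention-weighted sum of these tensor-product values.
        N, d = U.shape
        A = softmax(two_simplicial_logits(P, L1, L2).reshape(N, -1)).reshape(N, N, N)
        out = np.zeros((N, d))
        for i in range(N):
            for j in range(N):
                for k in range(N):
                    out[i] += A[i, j, k] * (B @ np.kron(U[j], U[k]))
        return out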
Although we do not impose any further constraints, the motivation here is to equip H with the structure of an algebra; in this respect we model conjunction by multiplication, an idea going back to Boole (Boole, 1847). We compute multiple-head 2-simplicial attention in the same way as in the 1-simplicial case. To combine 1-simplicial heads (that is, ordinary Transformer heads) and 2-simplicial heads we use separate inner product spaces H 1, H 2 for each simplicial dimension, so that there are learned linear transformations ⊕4 and the queries, keys and values are extracted from an entity e i according to The update rule (for a single head in each simplicial dimension) is then: If there are K 1 heads of 1-simplicial attention and K 2 heads of 2-simplicial attention, then is modified in the obvious way using Without the additional layer normalisation on the output of the 2-simplicial attention we find that training is unstable. The natural explanation is that these outputs are constructed from polynomials of higher degree than the 1-simplicial attention, and thus computational paths that go through the 2-simplicial attention will be more vulnerable to exploding or vanishing gradients. The time complexity of 1-simplicial attention as a function of the number of entities is O(N 2) while the time complexity of 2-simplicial attention is O(N 3) since we have to calculate the attention for every triple (i, j, k) of entities. For this reason we consider only triples (i, j, k) where the base of the 2-simplex (j, k) is taken from a set of pairs predicted by the ordinary attention, which we view as the primary locus of computation. More precisely, we introduce in addition to the N entities (now referred to as standard entities) a set of M virtual entities e N +1,..., e N +M. These virtual entities serve as a "scratch pad" onto which the iterated ordinary attention can write representations, and we restrict j, k to lie in the range N < j, k ≤ N + M so that only value vectors obtained from virtual entities are propagated by the 2-simplicial attention. With virtual entities the update rule is for and for The updated representation e i is computed from v i, e i using as before. Observe that the virtual entities are not used to update the standard entities during 1-simplicial attention and the 2-simplicial attention is not used to update the virtual entities; instead the second summand in involves the vector u i = W U e i, which adds recurrence to the update of the virtual entities. After the attention phase the virtual entities are discarded. The method for updating the virtual entities is similar to the role of the memory nodes in the relational recurrent architecture of , the master node in (, §5.2) and memory slots in the Neural Turing Machine . The update rule has complexity O(N M 2) and so if we take M to be of order √ N we get the desired complexity O(N 2). The environment in our reinforcement learning problem is a variant of the BoxWorld environment from . The standard BoxWorld environment is a rectangular grid in which are situated the player (a dark gray tile) and a number of locked boxes represented by a pair of horizontally adjacent tiles with a tile of colour x, the key colour, on the left and a tile of colour y, the lock colour, on the right. There is also one loose key in each episode, which is a coloured tile not initially adjacent to any other coloured tile. All other tiles are blank (light gray) and are traversable by the player. 
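Continuing the sketch above (and reusing triple_product), restricting the base (j, k) of every 2-simplex to the M virtual entities replaces the [N, N, N] logit tensor by one of shape [N, M, M], so the cost drops from O(N³) to O(N·M²); with M of order √N this matches the O(N²) cost of ordinary attention. The function and argument names are ours:

    import numpy as np

    def restricted_two_simplicial_logits(P, L1, L2, n_standard, n_virtual):
        # Only standard entities i are updated by the 2-simplicial attention, and
        # only pairs (j, k) of virtual entities serve as bases of 2-simplices.
        virtual = range(n_standard, n_standard + n_virtual)
        return np.array([[[triple_product(P[i], L1[j], L2[k]) for k in virtual]
                          for j in virtual]
                         for i in range(n_standard)])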
The rightmost column of the screen is the inventory, which fills from the top and contains keys that have been collected by the player. The player can pick up any loose key by walking over it. In order to open a locked box, with key and lock colours x, y, the player must step on the lock while in possession of a copy of y, in which case one copy of this key is removed from the inventory and replaced by a key of colour x. The goal is to attain a white key, referred to as the Gem (represented by a white square) as shown in the sample episode of Figure 3. In this episode, there is a loose pink key (marked 1) which can be used to open one of two locked boxes, obtaining in this way either key 5 or key 2 1. The correct choice is 2, since this leads via the sequence of keys 3, 4 to the Gem. Some locked boxes, if opened, provide keys that are not useful for attaining the Gem. Since each key may only be used once, opening such boxes means the episode is rendered unsolvable. Such boxes are called distractors. An episode ends when the player either obtains the Gem (with a reward of +10) or opens a distractor box (reward −1). Opening any non-distractor box, or picking up a loose key, garners a reward of +1. The solution length is the number of locked boxes (including the one with the Gem) in the episode on the path from the loose key to the Gem. Our variant of the BoxWorld environment, bridge BoxWorld, is shown in Figure 4. In each episode two keys are now required to obtain the Gem, and there are therefore two loose keys on the board. To obtain the Gem, the player must step on either of the lock tiles with both keys in the inventory, at which point the episode ends with the usual +10 reward. Graphically, Gems with multiple locks are denoted with two vertical white tiles on the left, and the two lock tiles on the right. Two solution 1 The agent sees only the colours of tiles, not the numbers which are added here for exposition. paths (of the same length) leading to each of the locks on the Gem are generated with no overlapping colours, beginning with two loose keys. In episodes with multiple locks we do not consider distractor boxes of the old kind; instead there is a new type of distractor that we call a bridge. This is a locked box whose lock colour is taken from one solution branch and whose key colour is taken from the other branch. Opening the bridge renders the puzzle unsolvable. An episode ends when the player either obtains the Gem (reward +10) or opens the bridge (reward −1). Opening a box other than the bridge, or picking up a loose key, has a reward of +1 as before. In this paper we consider episodes with zero or one bridge (the player cannot fail to solve an episode with no bridge). Standard BoxWorld is straightforward for an agent to solve using relational reasoning, because leaves on the solution graph can be identified (their key colour appears only once on the board) and by propagating this information backwards along the arrows on the solution graph, an agent can identify distractors. Bridge BoxWorld emphasises reasoning about 3-way relationships (or 2-simplices). The following 2-simplex motifs appear in all solution graphs where a pair of boxes (α, β) is a source if they have the same lock colour but distinct key colours, and a sink if they have the same key colour but distinct lock colours (the 2-simplex leading to the Gem being an example). If α, β is a source or a sink then either α is the bridge or β is the bridge. 
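The source and sink motifs can be read directly off the visible boxes; as a toy illustration (not agent code), with each locked box represented as a (key_colour, lock_colour) pair:

    def sources_and_sinks(boxes):
        # A pair of boxes is a source if they share a lock colour but have distinct
        # key colours, and a sink if they share a key colour but have distinct lock
        # colours; in every such pair, one of the two boxes is the bridge.
        sources = [(a, b) for a in boxes for b in boxes
                   if a is not b and a[1] == b[1] and a[0] != b[0]]
        sinks = [(a, b) for a in boxes for b in boxes
                 if a is not b and a[0] == b[0] and a[1] != b[1]]
        return sources, sinks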
If the agent can observe both a source and a sink then it can locate the bridge. It is less clear how to identify bridges using iterated relational reasoning, because every path in the solution graph eventually reaches the Gem. Our baseline relational agent is modeled closely on , except that we found that a different arrangement of layer normalisations worked better in our experiments; see Remark 4.1. The code for our implementation of both agents is available online . In the following we describe the network architecture of both the relational and simplicial agent; we will note the differences between the two models as they arise. The input to the agent's network is an RGB image, represented as a tensor of shape [R, C + 1, 3], where R is the number of rows and C the number of columns (the C + 1 is due to the inventory). This tensor is divided by 255 and then passed through a 2 × 2 convolutional layer with 12 features, and then a 2 × 2 convolutional layer with 24 features. Both activation functions are ReLU and the padding on our convolutional layers is "valid", so that the output has shape [R − 2, C − 1, 24]. We then multiply by a weight matrix of shape 24 × 62 to obtain a tensor of shape [R − 2, C − 1, 62]. Each feature vector has a two-dimensional positional encoding concatenated to it, and the result is then reshaped into a tensor of shape [N, 64], where N = (R − 2)(C − 1) is the number of Transformer entities. This is the list (e_1, ..., e_N) of entity representations e_i ∈ V = R^64. In the case of the simplicial agent, a further two learned embedding vectors e_{N+1}, e_{N+2} are added to this list; these are the virtual entities. So with M = 0 in the case of the relational agent and M = 2 for the simplicial agent, the entity representations form a tensor of shape [N + M, 64]. This tensor is then passed through two iterations of the Transformer block (either purely 1-simplicial in the case of the relational agent, or including both 1- and 2-simplicial attention in the case of the simplicial agent). In the case of the simplicial agent the virtual entities are then discarded, so that in both cases we have a sequence of entities e_1, ..., e_N. Inside each block are two feedforward layers with 64 hidden nodes, separated by a ReLU activation; the weights are shared between iterations of the Transformer block. In the 2-simplicial Transformer block the input tensor, after layer normalisation, is passed through the 2-simplicial attention, and the result (after an additional layer normalisation) is concatenated to the output of the 1-simplicial attention heads before being passed through the feedforward layers. The pseudo-code for the ordinary and 2-simplicial Transformer blocks is:

    def transformer_block(e):
        x = LayerNorm(e)
        a = 1SimplicialAttention(x)
        b = DenseLayer1(a)
        c = DenseLayer2(b)
        r = Add([e, c])
        e_prime = LayerNorm(r)
        return e_prime

    def simplicial_transformer_block(e):
        x = LayerNorm(e)
        a1 = 1SimplicialAttention(x)
        a2 = 2SimplicialAttention(x)
        a2n = LayerNorm(a2)
        ac = Concatenate([a1, a2n])
        b = DenseLayer1(ac)
        c = DenseLayer2(b)
        r = Add([e, c])
        e_prime = LayerNorm(r)
        return e_prime

Our implementation of the standard Transformer block is based on an implementation in Keras from .
In both the relational and simplicial agent, the space V of entity representations has dimension 64 and we denote by H 1, H 2 the spaces of 1-simplicial and 2-simplicial queries, keys and values. In both the relational and simplicial agent there are two heads of 1-simplicial attention, In the simplicial agent there is a single head of 2-simplicial attention with dim(H 2) = 48 and two virtual entities. The output of the Transformer blocks is a tensor of shape [N, 64]. To this final entity tensor we apply max-pooling over the entity dimension, that is, we compute a vector v ∈ R 64 by the rule v i = max 1≤j≤N (e j) i for 1 ≤ i ≤ 64. This vector v is then passed through four fully-connected layers with 256 hidden nodes and ReLU activations. The output of the final fully-connected layer is multiplied by one 256 × 4 weight matrix to produce logits for the actions (left, up, right and down) and another 256 × 1 weight matrix to produce the value function. Remark 4.1. There is wide variation in the layer normalisation in Transformer models, compare (; ;). In layer normalisation occurs in two places: on the concatenation of the Q, K, V matrices, and on the output of the feedforward network g θ. We keep this second normalisation but move the first from after the linear transformation E of to before this linear transformation, so that it is applied directly to the incoming entity representations. This ordering gave the best performant relational model in our experiments, with our diverging even further if a direct comparison to the architecture was used. The training of our agents uses the implementation in Ray RLlib of the distributed off-policy actor-critic architecture IMPALA of with optimisation algorithm RMSProp. The hyperparameters for IMPALA and RMSProp are given in Table 1 of Appendix E. Following and other recent work in deep reinforcement learning, we use RMSProp with a large value of the hyperparameter ε = 0.1. As we explain in Appendix G, this is effectively RMSProp with smoothed gradient clipping. First we verified that our implementation of the relational agent solves the BoxWorld environment with a solution length sampled from and number of distractors sampled from on a 9 × 9 grid. After training for 2.35 × 10 9 timesteps our implementation solved over 93% of puzzles (regarding the discrepancy with the reported sample complexity in see Appendix D). Next we trained the relational and simplicial agent on bridge BoxWorld, under the following conditions: half of the episodes contain a bridge, the solution length is uniformly sampled from (both solution paths are of the same length), colours are uniformly sampled from a set of 20 colours and the boxes and loose keys are arranged randomly on a 7 × 9 grid, under the constraint that the box containing the Gem does not occur in the rightmost column or bottom row, and keys appear only in positions (y, x) = (2r, 3c − 1) for 1 ≤ r ≤ 3, 1 ≤ c ≤ 3. The starting and ending point of the bridge are uniformly sampled with no restrictions (e.g. the bridge can involve the colours of the loose keys and locks on the Gem) but the lock colour is always on the top solution path. There is no curriculum and no cap on timesteps per episode. We trained four independent trials of both agents to either 5.5 × 10 9 timesteps or convergence, whichever came first. In Figure 6 we give the mean and standard deviation of these four trials, showing a clear advantage of the simplicial agent. 
We make some remarks about performance comparisons taking into account the fact that the relational agent is simpler (and hence faster to execute) than the simplicial agent in Appendix D. The training runs for the relational and simplicial agents are shown in Figure 9 and Figure 10 of Appendix F, together with analysis and visualization of the 1-and 2-simplicial attention in specific examples. In the reported experiments we use only two Transformer blocks; we performed two trials of a relational agent using four Transformer blocks, but after 5.5 × 10 9 timesteps neither trial exceeded the 0.85 plateau in terms of fraction solved. Our overall therefore suggest that the 2-simplicial Transformer is more powerful than the standard Transformer, with its performance not matched by adding greater depth. This is further supported by the fact on a time-adjusted basis, the 2-simplicial model still converges faster than the ordinary model; see Figure 8 of Appendix D. We analyse the simplicial agent to establish that it has learned to use the 2-simplicial attention, and to provide some intuition for why 2-simplices are useful; additional details are in Appendix F. The analysis is complicated by the fact that our 2 × 2 convolutional layers (of which there are two) are not padded, so the number of entities processed by the Transformer blocks is (R − 2)(C − 1) where the original game board is R × C and there is an extra column for the inventory (here R is the number of rows). This means there is not a one-to-one correspondence between game board tiles and entities; for example, all the experiments reported in Figure 6 are on a 7 × 9 board, so that there are N = 40 Transformer entities which can be arranged on a 5 × 8 grid (information about this grid is passed to the Transformer blocks via the positional encoding). Nonetheless we found that for trained agents there is a strong relation between a tile in position (y, x) and the Transformer entity with index This correspondence is presumed in the following analysis, and in our visualisations. Displayed in Figure 7 are attention distributions for simplicial agent A of Figure 10. The four images in the top right show the ordinary attention of the virtual entities in the first iteration of the simplicial Transformer block: in the first head, the first virtual entity attends strongly to a particular lock, while the second head of the second virtual entity attends strongly to the corresponding key. Shown at the bottom of Figure 7 is the 2-simplicial attention in the second iteration of the simplicial Transformer block. The columns are query entities i and rows are key entity pairs (j, k) in lexicographic order,,,. Entity 17 is the top lock on the Gem, 25 is the bottom lock on the Gem, 39 is the player. We may therefore infer, from our earlier description of the ordinary attention of the virtual entities, that the agent "perceives" the 2-simplex with query entity 25 as shown. In general we observe that the top and bottom locks on the Gem, the player, and the entities 7, 15 associated to the inventory often have a non-generic 2-simplicial attention, which strongly suggests that the simplicial agent has learned to use 2-simplices in a meaningful way. Figure 7: Visualization of 2-simplicial attention in step 18 of an episode. 
On general grounds one might expect that in the limit of infinite experience, any reinforcement learning agent with a sufficiently deep neural network will be able to solve any environment, in-cluding those like bridge BoxWorld that involve higher-order relations between entities. In practice, however, we do not care about the infinite computation limit. In the regime of bounded computation it is reasonable to introduce biases towards learning representations of structures that are found in a wide range of environments that we consider important. We argue that higher-order relations between entities are an important example of such structures, and that the 2-simplicial Transformer is a natural inductive bias for 3-way interactions between entities. We have given preliminary evidence for the utility of this bias by showing that in the bridge BoxWorld environment the simplicial agent has better performance than a purely relational agent, and that this performance involves in a meaningful way the prediction of 3-way interactions (or 2-simplices). We believe that simplicial Transformers may be useful for any problem in which higher-order relations between entities are important. The long history of interactions between logic and algebra is a natural source of inspiration for the design of inductive biases in deep learning. In this paper we have exhibited one example: Boole's idea, that relationships between entities can be modeled by multiplication in an algebra, may be realised in the context of deep learning as an augmentation to the Transformer architecture using Clifford algebras of spaces of representations. The Transformer model and descendents such as the Universal Transformer can be viewed as general units for computing with learned representations; in this sense they have a similar conceptual role to the Neural Turing Machine (NTM) and Differentiable Neural Computer . As pointed out in (, §4) one can view the Transformer as a block of parallel RNNs (one for each entity) which update their hidden states at each time step by attending to the sequence of hidden states of the other RNNs at the previous step. We expand on those remarks here in order to explain the connection between the 2-simplicial Transformer and earlier work in the NLP literature, which is written in terms of RNNs. We consider a NTM with content-based addressing only and no sharpening. The core of the NTM is an RNN controller with update rule where W, U, b are weight matrices, x is the current input symbol, h is the previous hidden state, h is the next hidden state and M is the output of the memory read head where there are N memory slots containing M 1,... M N, q is a query generated from the hidden state of the RNN by a weight matrix q = Zh, and We omit the mechanism for writing to the memory here, since it is less obvious how that relates to the Transformer; see (, §3.2). Note that while we can view M j as the "hidden state" of memory slot j, the controller's hidden state and the hidden states of the memory slots play asymmetric roles, since the former is updated with a feedforward network at each time step, while the latter is not. The Transformer with shared transition functions between layers is analogous to a NTM with this asymmetry removed: there is no longer a separate recurrent controller, and every memory slot is updated with a feedforward network in each timestep. To explain, view the entity representations e 1,..., e N of the Transformer as the hidden states of N parallel RNNs. 
The new representation is where the attention term is and q i = Ze i is a query vector obtained by a weight matrix from the hidden state, the k j = Ke j are key vectors and v j = V e j is the value vector. Note that in the Transformer the double role of M j in the NTM has been replaced by two separate vectors, the key and value, and the cosine similarity K[−, −] has been replaced by the dot product. Having now made the connection between the Transformer and RNNs, we note that the second-order RNN (; ; ;) and the similar multiplicative RNN have in common that the update rule for the hidden state of the RNN involves a term V (x ⊗ h) which is a linear function of the tensor product of the current input symbol x and the current hidden state h. One way to think of this is that the weight matrix V maps inputs x to linear operators on the hidden state. In the update rule contains a term V (e 1 ⊗ e 2) where e 1, e 2 are entity vectors, and this is directly analogous to our construction. The continuous Hopfield network (, Ch.42) with N nodes updates in each timestep a sequence of vectors by the rules for some parameter η. The Transformer block may therefore be viewed as a refinement of the Hopfield network, in which the three occurrences of entity vectors in are replaced by query, key and value vectors W Q e i, W K e j, W V e j respectively, the nonlinearity is replaced by a feedforward network with multiple layers, and the dynamics are stabilised by layer normalisation. The initial representations e i also incorporate information about the underlying lattice, via the positional embeddings. The idea that the structure of a sentence acts to transform the meaning of its parts is due to Frege (Frege, 1892) and underlies the denotational semantics of logic. From this point of view the Transformer architecture is an inheritor both of the logical tradition of denotational semantics, and of the statistical mechanics tradition via Hopfield networks. The volume of an n-simplex in R n with vertices at 0, v 1,..., v n is which is 1 n! times the volume of the n-dimensional parallelotope which shares n edges with the nsimplex. In our applications the space of representations H is high dimensional, but we wish to speak of the volume of k-simplices for k < dim(H) and use those volumes to define the coefficients of our simplicial attention. The theory of Clifford algebras is one appropriate framework for such calculations. Let H be an inner product space with pairing (v, w) → v · w. The Clifford algebra Cl(H) is the associative unital R-algebra generated by the vectors v ∈ H with relations vw + wv = 2(v · w) · 1. The canonical k-linear map H −→ Cl(H) is injective, and since v 2 = v 2 · 1 in Cl(H), any nonzero vector v ∈ H is a unit in the Clifford algebra. While as an algebra Cl(H) is only Z 2 -graded, there is nonetheless a Z-grading of the underlying vector space which can be defined as follows: let {e i} n i=1 be an orthonormal basis of H, then the set is a basis for Cl(H), with m ranging over the set {0, . . ., n}. If we assign the basis element e i1 · · · e im the degree m, then this determines a Z-grading [−] k of the Clifford algebra which is easily checked to be independent of the choice of basis. Definition C.1. [A] k denotes the homogeneous component of A ∈ Cl(H) of degree k. 
There is an operation on elements of the Clifford algebra called reversion in geometric algebra (, p.45) which arises as follows: the opposite algebra Cl(H) op admits a k-linear map j: H −→ Cl(H) op with j(v) = v which satisfies j(v)j(w) + j(w)j(v) = 2(v · w) · 1, and so by the universal property there is a unique morphism of algebras op which restricts to the identity on H..., v k ∈ H and (−) † is homogeneous of degree zero with respect to the Z-grading. Using this operation we can define the magnitude (, p.46) of any element of the Clifford algebra. and in particular for v ∈ H we have |v| = v. Lemma C.4. Set n = dim(H). Then for A ∈ Cl(H) we have Proof. See (, Chapter 2 (1.33)). Example C.5. For a, b, c ∈ H the lemma gives Remark C.6. Given vectors v 1,..., v k ∈ H the wedge product v 1 ∧ · · · ∧ v k is an element in the exterior algebra H. Using the chosen basis B we can identify the underlying vector space of Cl(H) with H and using this identification (set where S k is the permutation group on k letters. That is, the top degree piece of v 1 · · · v k in Cl(H) is always the wedge product. It is then easy to check that the squared magnitude of this wedge product is The term in the innermost bracket is the determinant of the k × k submatrix with columns j = (j 1, . . ., j k) and in the special case where k = n = dim(H) we see that the squared magnitude is just the square of the determinant of the matrix (λ ij) 1≤i,j≤n. The wedge product of k-vectors in H can be thought of as an oriented k-simplex, and the magnitude of this wedge product in the Clifford algebra computes the volume. Definition C.7. The volume of a k-simplex in H with vertices 0, v 1,..., v k is Definition C.8. Given v 1,..., v k ∈ H the k-fold unsigned scalar product is By Lemma C.4 and we have which gives the desired generalisation of the equations in Figure 2. Example C.9. For k = 2 the unsigned scalar product is the absolute value of the dot product, a, b = |a · b|. For k = 3 we obtain the formulas of Definition 2.2, from which it is easy to check that a, b, c = a b c cos 2 θ ab + cos 2 θ bc + cos 2 θ ac − 2 cos θ ab cos θ ac cos θ bc where θ ac, θ ab, θ ac are the angles between a, b, c. The geometry of the three-dimensional case is more familiar: if dim(H) = 3 then |[abc] 3 | is the absolute value of the determinant by, so that Vol 3 = With these formulas in mind the geometric content of the following lemma is clear: (ii) If the v i are all pairwise orthogonal then v 1,..., v k = 0. (iii) The set {v 1, . . ., v k} is linearly dependent if and only if v 1,..., (v) For λ 1,..., λ k ∈ R, we have For more on simplicial methods in the context of geometric algebra see (; The experiments in the original BoxWorld paper contain an unreported cap on timesteps per episode (an episode horizon) of 120 timesteps . We have chosen to run our experiments without an episode horizon, and since this means our reported sample complexities diverge substantially from the original paper (some part of which it seems reasonable to attribute to the lack of horizon) it is necessary to justify this choice. When designing an architecture for deep reinforcement learning the goal is to reduce the expected generalisation error (, §8.1.1) with respect to some class of similar environments. Although this class is typically difficult to specify and is often left implicit, in our case the class includes a range of visual logic puzzles involving spatial navigation, which can be solved without memory 2. 
A learning curriculum undermines this goal, by making our expectations of generalisation conditional on the provision of a suitable curriculum, whose existence for a given member of the problem class may not be clear in advance. The episode horizon serves as a de facto curriculum, since early in training it biases the distribution of experience rollouts towards the initial problems that an agent has to solve (e.g. learning to pick up the loose key). In order to avoid compromising our ability to expect generalisation to similar puzzles which do not admit such a useful curriculum, we have chosen not to employ an episode horizon. Fortunately, the relational agent performs well even without a curriculum on the original BoxWorld, as our show. In Figure 6 of Section 5, the horizontal axis was environment steps. However, since the simplicial agent has a more complex model, each environment step takes longer to execute and the gradient descent steps are slower. In a typical experiment run on the GCP configuration, the training throughput of the relational agent is 1.9 × 10 4 environment frames per second (FPS) and that of the simplicial agent is 1.4 × 10 4 FPS. The relative performance gap decreases as the GPU memory and the number of IMPALA workers are increased, and this is consistent with the fact that the primary performance difference appears to be the time taken to compute the gradients (35ms vs 80ms). In Figure 8 we give the time-adjusted performance of the simplicial agent (the graph for the relational agent is as before) where the x-axis of the graph of the simplicial agent is scaled by 1.9/1.4. In principle there is no reason for a significant performance mismatch: the 2-simplicial attention can be run in parallel to the ordinary attention (perhaps with two iterations of the 1-simplicial attention per iteration of the 2-simplicial attention) so that with better engineering it should be possible to reduce this gap. 2 The bridge is the unique box both of whose colours appear three times on the board. However, this is not a reliable strategy for detecting bridges for an agent without memory, because once the agent has collected some of the keys on the board, some of the colours necessary to make this deduction may no longer be present. Our experiments involve only a small number of virtual entities, and a small number of iterations of the Transformer block: it is possible that for large numbers of virtual entities and iterations, our choices of layer normalisation are not optimal. Our aim was to test the viability of the simplicial Transformer starting with the minimal configuration, so we have also not tested multiple heads of 2-simplicial attention. Deep reinforcement learning is notorious for poor reproducibility , and in an attempt to follow the emerging best practices we are releasing our agent and environment code, trained agent weights, and training notebooks . The training runs for the relational and simplicial agents are shown in Figure 9 and Figure 10 respectively. In this Appendix we provide further details relating to the analysis of the attention of the trained simplicial agent in Section 6. Across our four trained simplicial agents, the roles of the virtual entities and heads vary: the following comments are all in the context of the best simplicial agent (simplicial agent A of Figure 10) but we observe similar patterns in the other trials. The standard entities are now indexed by 0 ≤ i ≤ 39 and virtual entities by i = 40, 41. 
In the first iteration of the 2-simplicial Transformer block, the first 1-simplicial head appears to propagate information about the inventory. At the beginning of an episode the attention of each standard entity is distributed between entities 7, 15, 23, 31 (the entities in the rightmost column), it concentrates sharply on 7 (the entity closest to the first inventory slot) after the acquisition of the first loose key, and sharply on 7, 15 after the acquisition of the second loose key. The second 1-simplicial head seems to acquire the meaning described in , where tiles of the same colour attend to one another. A typical example is shown in Figure 11. The video of this episode is available online . Figure 11: Visualisation of 1-simplicial attention in first Transformer block, between standard entities in heads one and two. The vertical axes on the second and third images are the query index 0 ≤ i ≤ 39, the horizontal axes are the key index 0 ≤ j ≤ 39. The standard entities are updated using 2-simplices in the first iteration of the 2-simplicial Transformer block, but this is not interesting as initially the virtual entities are learned embedding vectors, containing no information about the current episode. So we restrict our analysis to the 2-simplicial attention in the second iteration of the Transformer block. For the analysis, it will be convenient to organise episodes of bridge BoxWorld by their puzzle type, which is the tuple (a, b, c) where 1 ≤ a ≤ 3 is the solution length, 1 ≤ b ≤ a is the bridge source and a + 1 ≤ c ≤ 2a is the bridge target, with indices increasing with the distance from the gem. The episodes in Figures 4 and 7 have type. Figure 12: Visualisation of the 2-simplicial attention in the second Transformer block in step 13 of an episode of puzzle type. Entity 1 is the top lock on the Gem, 15 is associated with the inventory, 36 is the lock directly below the player. Shown is a 2-simplex with target 15. Figure 13: Visualisation of the 2-simplicial attention in the second Transformer block in step 29 of an episode of puzzle type. Entity 7 is associated with the inventory, 17 is the player. Shown is a 2-simplex with target 17. To give more details we must first examine the content of the virtual entities after the first iteration, which is a function of the 1-simplicial attention of the virtual entities in the first iteration. In Figures 7, 12, 13 we show these attention distributions multiplied by the pixels in the region [1, R − 2] × [1, C − 1] of the original board, in the second and third columns of the second and third rows. Let f 1 = e 40 and f 2 = e 41 denote the initial representations of the first and second virtual entities, before the first iteration. We use the index z ∈ {1, 2} to stand for a virtual entity. In the first iteration the representations are updated by to where the sum is over all entities α, the a z α are the attention coefficients of the first 1-simplicial head and the coefficients b z α are the attention of the second 1-simplicial head. 
Writing 0 1, 0 2 for the zero vector in H 1 1, H 1 2 respectively, this can be written as For a query entity i the vector propagated by the 2-simplicial part of the second iteration has the following terms, where Here A i j,k is the 2-simplicial attention with logits p i, l ) is the ith column in our visualisations of the 2-simplicial attention, so in the situation of Figure 7 with i = 25 we have A 25 1,2 ≈ 1 and hence the output of the 2-simplicial head used to update the entity representation of the bottom lock on the Gem is approximately B(f 1 ⊗ f 2). If we ignore the layer normalisation, feedforward network and skip connection in then f 1 ≈ v 1 ⊕ 0 2 and f 2 ≈ 0 1 ⊕ v 0 so that the output of the 2-simplicial head with target i = 25 is approximately Following Boole (Boole, 1847) and Girard it is natural to read the "product" as a conjunction (consider together the entity 1 and the entity 0) and the sum in as a disjunction. An additional layer normalisation is applied to this vector, and the is concatenated with the incoming information for entity 25 from the 1-simplicial attention, before all of this is passed through to form e 25. Given that the output of the 2-simplicial head is the only nontrivial difference between the simplicial and relational agent (with a transformer depth of two, the first 2-simplicial Transformer block only updates the standard entities with information from embedding vectors) the performance differences reported in Figure 6 suggest that this output is informative about avoiding bridges. In the training curves of the agents of Figure 9 and Figure 10 we observe a common plateau at a win rate of 0.85. In Figure 14 we show the per-puzzle win rate of simplicial agent A and relational agent A, on puzzles. These graphs make clear that the transition of both agents to the plateau at 0.85 is explained by solving the type (and to a lesser degree by progress on all puzzle types with b = 1). In Figure 14 and Figure 15 we give the per-puzzle win rates for a small sample of other puzzle types. Shown are the mean and standard deviation of 100 runs across various checkpoints of simplicial agent A and relational agent A. As originally presented in the optimisation algorithm RMSProp is a mini-batch version of Rprop, where instead of dividing by a different number in every mini-batch 3, 3, 5),. (namely, the absolute value of the gradient) we force this number to be similar for adjacent minibatches by keeping a moving average of the square of the gradient. In more detail, one step Rprop is computed by the algorithm κ is the learning rate, x i is a weight, g i is the associated gradient and ε is a small constant (the TensorFlow default value is 10 −10) added for numerical stability. The idea of Rprop is to update weights using only the sign of the gradient: every weight is updated by the same absolute amount κ in each step, with only the sign g i / √ r i = g i /|g i | of the update varying with i. The algorithm RMSprop was introduced as a refinement of Rprop: p is the decay rate (in our experiments the value is 0.99). Clearly Rprop is the p → 0 limit of RMSprop. For further see (, §8.5 In recent years there has been a trend in the literature towards using RMSprop with large values of the hyperparameter ε. For example in RMSProp is used with ε = 0.1, which is also one of the range of values in (, . This "large ε RMSProp" seems to have originated in (, §8). 
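For reference, here is a minimal sketch of the two update rules in the notation above (the code and names are ours; we place ε under the square root, one common convention and the one consistent with the rewrite that follows):

    import numpy as np

    def rprop_step(x, g, lr, eps=1e-10):
        # Rprop: every weight moves by (essentially) the same amount lr, with only
        # the sign of its gradient varying: x <- x - lr * g / sqrt(g^2 + eps).
        return x - lr * g / np.sqrt(g**2 + eps)

    def rmsprop_step(x, g, r, lr, rho=0.99, eps=0.1):
        # RMSProp: divide by a moving average r of the squared gradient rather than
        # by |g| itself; Rprop is recovered in the limit rho -> 0. With the small
        # default eps this behaves like a smoothed Rprop, while a large eps (here
        # 0.1) gives the smoothed gradient clipping behaviour analysed below.
        r = rho * r + (1.0 - rho) * g**2
        return x - lr * g / np.sqrt(r + eps), r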
To understand what large ε RMSProp is doing, let us rewrite the algorithm as where S is the sigmoid S(u) = u/ √ 1 + u 2 which asymptotes to 1 as u → +∞ and is wellapproximated by the identity function for small u. We see a new multiplicative factor S(r i /ε) in the optimisation algorithm. Note that √ r i is a moving average of |g i |. Recall the original purpose of Rprop was to update weights using only the sign of the gradient and the learning rate, namely κg i / √ r i. The new S factor in the above reinserts the size of the gradient, but scaled by the sigmoid to be in the unit interval. In the limit ε → 0 we squash the outputs of the sigmoid up near 1 and the standard conceptual description of RMSProp applies. But as ε → 1 the sigmoid S(√ r i) has the effect that for large stable gradients we get updates of size κ and for small stable gradients we get updates of the same magnitude as the gradient. In , large ε RMSprop is a form of RMSprop with smoothed gradient clipping (, §10.11.1). It is no simple matter to define logical reasoning nor to recognise when an agent (be it an animal or a deep reinforcement learning agent) is employing such reasoning . We therefore begin by returning to Aristotle, who viewed logic as the study of general patterns by which one could distinguish valid and invalid forms of philosophical argumentation; this study having as its purpose the production of strategies for winning such argumentation games (; ;). In this view, logic involves • two players with one asserting the truth of a proposition and attempting to defend it, and the latter asserting its falsehood and attempting to refute it, and an • observer attempting to learn the general patterns which are predictive of which of the two players will win such a game given some intermediate state. Suppose we observe over a series of games 4 that a player is following an explicit strategy which has been distilled from general patterns observed in a large distribution of games, and that by following this strategy they almost always win. A component of that explicit strategy can be thought of as logical reasoning to the degree that it consists of rules that are independent of the particulars of the game (, §11.25). The problem of recognising logical reasoning in behaviour is therefore twofold: the strategy employed by a player is typically implicit, and even if we can recognise explicit components of the strategy, in practice there is not always a clear way to decide which rules are domain-specific. In mathematical logic the idea of argumentation games has been developed into a theory of mathematical proof as strategy in the game semantics of linear logic where one player (the prover) asserts a proposition G and the other player (the refuter) interrogates this assertion. Published as a conference paper at ICLR 2020 Consider a reinforcement learning problem in which the deterministic environment encodes G together with a multiset of hypotheses Γ which are sufficient to prove G. Such a pair is called a sequent and is denoted Γ G. The goal of the agent (in the role of prover) is to synthesise a proof of G from Γ through a series of actions. The environment (in the role of refuter) delivers a positive reward if the agent succeeds, and a negative reward if the agent's actions indicate a commitment to a line of proof which cannot possibly succeed. 
Consider a deep reinforcement learning agent with a policy network parametrised by a vector of weights w ∈ R D and a sequence of full-episode rollouts of this policy in the environment, each of which either ends with the agent constructing a proof (prover wins) or failing to construct a proof (refuter wins) with the sequent Γ G being randomly sampled in each episode. Viewing these episodes as instances of an argumentation game, the goal of Aristotle's observer is to learn from this data to predict, given an intermediate state of some particular episode, which actions by the prover will lead to success (proof) or failure (refutation). As the reward is correlated with success and failure in this sense, the goal of the observer may be identified with the training objective of the action-value network underlying the agent's policy, and we may identify the triple player, opponent, observer with the triple agent, environment and optimisation process. If this process succeeds, so that the trained agent wins in almost every episode, then by definition the weights w are an implicit strategy for proving sequents Γ G. This leads to the question: is the deep reinforcement learning agent parametrised by w performing logical reasoning? We would have no reason to deny that logical reasoning is present if we were to find, in the weights w and dynamics of the agent's network, an isomorphic image of an explicit strategy that we recognise as logically correct. In general, however, it seems more useful to ask to what degree the behaviour is governed by logical reasoning, and thus to what extent we can identify an approximate homomorphic image in the weights and dynamics of a logically correct explicit strategy. Ultimately this should be automated using "logic probes" along the lines of recent developments in neural network probes (; ; ; ;). The design of the BoxWorld environment was intended to stress the planning and reasoning components of an agent's policy (, p.2) and for this reason it is the underlying logical structure of the environment that is of central importance. To explain the logical structure of BoxWorld and bridge BoxWorld we introduce the following notation: given a colour c, we use C to stand for the proposition that a key of this colour is obtainable. Each episode expresses its own set of basic facts, or axioms, about obtainability. For instance, a loose key of colour c gives C as an axiom, and a locked box requiring a key of colour c in order to obtain a key of colour d gives an axiom that at first glance appears to be the implication C −→ D of classical logic. However, since a key may only be used once, this is actually incorrect; instead the logical structure of this situation is captured by the linear implication C D of linear logic . With this understood, each episode of the original BoxWorld provides in visual form a set of axioms Γ such that a strategy for obtaining the Gem is equivalent to a proof of Γ G in intuitionistic linear logic, where G stands for the proposition that the Gem is obtainable. There is a general correspondence in logic between strategies and proofs which we recall in Appendix I. To describe the logical structure of bridge BoxWorld we need to encode the fact that two keys (say a green key and a blue key) are required to obtain the Gem. Once again, it is the linear conjunction ⊗ of linear logic (also called the tensor product) rather than the conjunction of classical logic that properly captures the semantics. 
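As a purely illustrative (hypothetical) example of such a sequent, an original-BoxWorld episode containing a loose red key, a red-locked box holding a blue key, and a blue-locked box holding the Gem contributes the axioms

$$\Gamma \;=\; \{\, R,\;\; R \multimap B,\;\; B \multimap G \,\}, \qquad \Gamma \vdash G,$$

and a winning strategy for this episode corresponds to a proof of Γ ⊢ G in which each axiom is consumed exactly once, reflecting the single-use nature of keys.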
The axioms Γ encoded in an episode of bridge BoxWorld contain a single formula of the form X 1 ⊗ X 2 G where x 1, x 2 are the colours of the keys on the Gem, and again a strategy is equivalent to a proof of Γ G. In , the logical structure of the original BoxWorld consists of a fragment of linear logic containing only the connective, while bridge BoxWorld captures a slightly larger fragment containing and ⊗. Next we explain the correspondence between agent behaviour in bridge BoxWorld and proofs in linear logic. For an introduction to linear logic tailored to the setting of games see (, Ch.2). Recall that to each colour c we have associated a proposition C which can be read as "the key of colour c is obtainable". If a box β appears in an episode of bridge BoxWorld (this includes loose in the sentence "There was a cat and he liked to sleep"). These representations of relations take the form of query and key vectors governing the passing of messages between entities; messages update entity representations over several rounds of computation until the final representations reflect not just the meaning of words but also their context in a sentence. There is some evidence that the geometry of these final representations serve to organise word representations in a syntax tree, which could be seen as the appropriate analogue to two-dimensional space in the context of language . The Transformer may therefore be viewed as an inductive bias for learning structural representations which are graphs, with entities as vertices and relations as edges. While a graph is a discrete mathematical object, there is a naturally associated topological space which is obtained by gluing 1-simplices (copies of the unit interval) indexed by edges along 0-simplices (points) indexed by vertices. There is a general mathematical notion of a simplicial set which is a discrete structure containing a set of n-simplices for all n ≥ 0 together with an encoding of the incidence relations between these simplices. Associated to each simplicial set is a topological space, obtained by gluing together vertices, edges, triangles (2-simplices), tetrahedrons (3-simplices), and so on, according to the instructions contained in the simplicial set. Following the aforementioned works in neuroscience (; ; ; ; ;) and their emphasis on spatial structure, it is natural to ask if a simplicial inductive bias for learning structural representations can facilitate abstract reasoning. This question partly motivated the developments in this paper.
We introduce the 2-simplicial Transformer and show that this architecture is a useful inductive bias for logical reasoning in the context of deep reinforcement learning.
We present Tensor-Train RNN (TT-RNN), a novel family of neural sequence architectures for multivariate forecasting in environments with nonlinear dynamics. Long-term forecasting in such systems is highly challenging, since there exist long-term temporal dependencies, higher-order correlations and sensitivity to error propagation. Our proposed tensor recurrent architecture addresses these issues by learning the nonlinear dynamics directly using higher order moments and high-order state transition functions. Furthermore, we decompose the higher-order structure using the tensor-train (TT) decomposition to reduce the number of parameters while preserving the model performance. We theoretically establish the approximation properties of Tensor-Train RNNs for general sequence inputs, and such guarantees are not available for usual RNNs. We also demonstrate significant long-term prediction improvements over general RNN and LSTM architectures on a range of simulated environments with nonlinear dynamics, as well on real-world climate and traffic data. One of the central questions in science is forecasting: given the past history, how well can we predict the future? In many domains with complex multivariate correlation structures and nonlinear dynamics, forecasting is highly challenging since the system has long-term temporal dependencies and higher-order dynamics. Examples of such systems abound in science and engineering, from biological neural network activity, fluid turbulence, to climate and traffic systems (see FIG0). Since current forecasting systems are unable to faithfully represent the higher-order dynamics, they have limited ability for accurate long-term forecasting. Therefore, a key challenge is accurately modeling nonlinear dynamics and obtaining stable long-term predictions, given a dataset of realizations of the dynamics. Here, the forecasting problem can be stated as follows: how can we efficiently learn a model that, given only few initial states, can reliably predict a sequence of future states over a long horizon of T time-steps? Common approaches to forecasting involve linear time series models such as auto-regressive moving average (ARMA), state space models such as hidden Markov model (HMM), and deep neural networks. We refer readers to a survey on time series forecasting by BID2 and the references therein. A recurrent neural network (RNN), as well as its memory-based extensions such as the LSTM, is a class of models that have achieved good performance on sequence prediction tasks from demand forecasting BID5 to speech recognition BID15 and video analysis BID9. Although these methods can be effective for short-term, smooth dynamics, neither analytic nor data-driven learning methods tend to generalize well to capturing long-term nonlinear dynamics and predicting them over longer time horizons. To address this issue, we propose a novel family of tensor-train recurrent neural networks that can learn stable long-term forecasting. These models have two key features: they 1) explicitly model the higher-order dynamics, by using a longer history of previous hidden states and high-order state interactions with multiplicative memory units; and 2) they are scalable by using tensor trains, a structured low-rank tensor decomposition that greatly reduces the number of model parameters, while mostly preserving the correlation structure of the full-rank model. 
In this work, we analyze Tensor-Train RNNs theoretically, and also experimentally validate them over a wide range of forecasting domains. Our contributions can be summarized as follows:• We describe how TT-RNNs encode higher-order non-Markovian dynamics and high-order state interactions. To address the memory issue, we propose a tensor-train (TT) decomposition that makes learning tractable and fast.• We provide theoretical guarantees for the representation power of TT-RNNs for nonlinear dynamics, and obtain the connection between the target dynamics and TT-RNN approximation. In contrast, no such theoretical are known for standard recurrent networks.• We validate TT-RNNs on simulated data and two real-world environments with nonlinear dynamics (climate and traffic). Here, we show that TT-RNNs can forecast more accurately for significantly longer time horizons compared to standard RNNs and LSTMs. Forecasting Nonlinear Dynamics Our goal is to learn an efficient model f for sequential multivariate forecasting in environments with nonlinear dynamics. Such systems are governed by dynamics that describe how a system state x t ∈ R d evolves using a set of nonlinear differential equations: DISPLAYFORM0 where ξ i can be an arbitrary (smooth) function of the state x t and its derivatives. Continous time dynamics are usually described by differential equations while difference equations are employed for discrete time. In continuous time, a classic example is the first-order Lorenz attractor, whose realizations showcase the "butterfly-effect", a characteristic set of double-spiral orbits. In discretetime, a non-trivial example is the 1-dimensional Genz dynamics, whose difference equation is: DISPLAYFORM1 where x t denotes the system state at time t and c, w are the parameters. Due to the nonlinear nature of the dynamics, such systems exhibit higher-order correlations, long-term dependencies and sensitivity to error propagation, and thus form a challenging setting for learning. Given a sequence of initial states x 0... x t, the forecasting problem aims to learn a model f DISPLAYFORM2 that outputs a sequence of future states x t+1... x T. Hence, accurately approximating the dynamics ξ is critical to learning a good forecasting model f and accurately predicting for long time horizons. First-order Markov Models In deep learning, common approaches for modeling dynamics usually employ first-order hidden-state models, such as recurrent neural networks (RNNs). An RNN with a single RNN cell recursively computes the output y t from a hidden state h t using: DISPLAYFORM3 where f is the state transition function, g is the output function and θ are model parameters. An RNN only considers the most recent hidden state in its state transition function. A common parametrization scheme for is a nonlinear activation function applied to a linear map of x t and h t−1 as: LSTMs BID8 and GRUs BID3. For instance, LSTM cells use a memory-state, which mitigate the "exploding gradient" problem and allow RNNs to propagate information over longer time horizons. Although RNNs are very expressive, they compute h t only using the previous state h t−1 and input x t. Such models do not explicitly model higher-order dynamics and only implicitly model long-term dependencies between all historical states h 0... h t, which limits their forecasting effectiveness in environments with nonlinear dynamics. 
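For contrast with the higher-order cells introduced next, the first-order parametrization described above (a nonlinear activation applied to a linear map of x_t and h_{t−1}) can be sketched as follows; the weight names and the tanh nonlinearity are illustrative choices.

```python
import numpy as np

def rnn_cell(x_t, h_prev, W_x, W_h, b):
    # first-order transition: the new state depends only on x_t and h_{t-1}
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

def rnn_output(h_t, W_o, c):
    # output function g applied to the current hidden state
    return W_o @ h_t + c
```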
DISPLAYFORM4 To effectively learn nonlinear dynamics, we propose Tensor-Train RNNs, or TT-RNNs, a class of higher-order models that can be viewed as a higher-order generalization of RNNs. We developed TT-RNNs with two goals in mind: explicitly modeling 1) L-order Markov processes with L steps of temporal memory and 2) polynomial interactions between the hidden states h · and x t.First, we consider longer "history": we keep length L historic states: DISPLAYFORM0 where f is an activation function. In principle, early work BID7 has shown that with a large enough hidden state size, such recurrent structures are capable of approximating any dynamics. Second, to learn the nonlinear dynamics ξ efficiently, we also use higher-order moments to approximate the state transition function. We construct a higher-order transition tensor by modeling a degree P polynomial interaction between hidden states. Hence, the TT-RNN with standard RNN cell is defined by: DISPLAYFORM1 where α index the hidden dimension, i · index historic hidden states and P is the polynomial degree. Here, we defined the L-lag hidden state as: DISPLAYFORM2 We included the bias unit 1 to model all possible polynomial expansions up to order P in a compact form. The TT-RNN with LSTM cell, or "TLSTM", is defined analogously as: DISPLAYFORM3 where • denotes the Hadamard product. Note that the bias units are again included. TT-RNN serves as a module for sequence-to-sequence (Seq2Seq) framework BID18, which consists of an encoder-decoder pair (see FIG1). We use tensor-train recurrent cells both the encoder and decoder. The encoder receives the initial states and the decoder predicts x t+1,..., x T. For each timestep t, the decoder uses its previous prediction y t as an input. Unfortunately, due to the "curse of dimensionality", the number of parameters in W α with hidden size H grows exponentially as O(HL P), which makes the high-order model prohibitively large to train. To overcome this difficulty, we utilize tensor networks to approximate the weight tensor. Such networks encode a structural decomposition of tensors into low-dimensional components and have been shown to provide the most general approximation to smooth tensors BID11. The most commonly used tensor networks are linear tensor networks (LTN), also known as tensor-trains in numerical analysis or matrix-product states in quantum physics BID12.A tensor train model decomposes a P -dimensional tensor W into a network of sparsely connected low-dimensional tensors DISPLAYFORM0 DISPLAYFORM1 as depicted in Figure. When r 0 = r P = 1 the {r d} are called the tensor-train rank. With tensortrain, we can reduce the number of parameters of TT-RNN from (HL + 1) P to (HL + 1)R 2 P, with R = max d r d as the upper bound on the tensor-train rank. Thus, a major benefit of tensor-train is that they do not suffer from the curse of dimensionality, which is in sharp contrast to many classical tensor decompositions, such as the Tucker decomposition. A significant benefit of using tensor-trains is that we can theoretically characterize the representation power of tensor-train neural networks for approximating high-dimensional functions. We do so by analyzing a class of functions that satisfies some regularity condition. For such functions, tensor-train decompositions preserve weak differentiability and yield a compact representation. 
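The following is a minimal sketch of the higher-order transition with the weight tensor kept in tensor-train format. The core shapes, the placement of the output mode in the final core, and the use of a vanilla RNN (rather than LSTM) cell are assumptions made for illustration; the published TLSTM folds the same contraction into an LSTM cell.

```python
import numpy as np

def tt_rnn_cell(h_hist, x_t, cores, W_x, b):
    """One step of a higher-order TT-RNN cell (illustrative parameterization).

    h_hist : list of the L most recent hidden states h_{t-1}, ..., h_{t-L}, each (H,)
    cores  : P tensor-train cores; core d has shape (r_{d-1}, D, r_d), with
             D = H*L + 1, r_0 = 1 and r_P = H (the last rank index is the output unit)
    """
    # L-lag state with a bias unit, so the contraction below realizes all
    # polynomial interactions of the past hidden states up to degree P
    s = np.concatenate([np.ones(1)] + list(h_hist))        # shape (D,)
    v = np.ones(1)                                          # shape (r_0,)
    for G in cores:                                         # G: (r_{d-1}, D, r_d)
        v = np.einsum('i,idj,d->j', v, G, s)                # contract one mode with s
    return np.tanh(v + W_x @ x_t + b)                       # new hidden state, shape (H,)

# Storing the cores instead of the dense tensor needs O((H*L+1) * R**2 * P)
# parameters rather than O((H*L+1)**P), matching the reduction described above.
```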
We combine this property with neural network estimation theory to bound the approximation error for TT-RNN with one hidden layer in terms of: 1) the regularity of the target function f, 2) the dimension of the input space, 3) the tensor train rank and 4) the order of the tensor. In the context of TT-RNN, the target function f (x), with x = s ⊗... ⊗ s, describes the state transitions of the system dynamics, as in. Let us assume that f (x) is a Sobolev function: f ∈ H k µ, defined on the input space I = I 1 × I 2 × · · · I d, where each I i is a set of vectors. The space H k µ is defined as the functions that have bounded derivatives up to some order k and are L µ -integrable: DISPLAYFORM0 where D (i) f is the i-th weak derivative of f and µ ≥ 0. 1 Any Sobolev function admits a Schmidt decomposition: DISPLAYFORM1, where {λ} are the eigenvalues and {γ}, {φ} are the associated eigenfunctions. Hence, we can decompose the target function f ∈ H k µ as: DISPLAYFORM2 where DISPLAYFORM3 We can truncate to a low dimensional subspace (r < ∞), and obtain the functional tensor-train (FTT) approximation of the target function f: DISPLAYFORM4 In practice, TT-RNN implements a polynomial expansion of the state s as in, using powers [s, s ⊗2, · · ·, s ⊗p] to approximate f T T, where p is the degree of the polynomial. We can then bound the approximation error using TT-RNN, viewed as a one-layer hidden neural network: 1 A weak derivative generalizes the derivative concept for (non)-differentiable functions and is implicitly defined as: DISPLAYFORM5 DISPLAYFORM6 is the size of the state space, r is the tensor-train rank and p is the degree of high-order polynomials i.e., the order of tensor. For the full proof, see the Appendix. From this theorem we see: 1) if the target f becomes smoother, it is easier to approximate and 2) polynomial interactions are more efficient than linear ones in the large rank region: if the polynomial order increases, we require fewer hidden units n. This applies to the full family of TT-RNNs, including those using vanilla RNN or LSTM as the recurrent cell, as long as we are given a state transitions (x t, s t) → s t+1 (e.g. the state transition function learned by the encoder). We validated the accuracy and efficiency of TT-RNN on one synthetic and two real-world datasets, as described below; Detailed preprocessing and data statistics are deferred to the Appendix. Genz dynamics The Genz "product peak" (see FIG3 a) is one of the Genz functions BID6, which are often used as a basis for high-dimensional function approximation. In particular, BID1 used them to analyze tensor-train decompositions. We generated 10, 000 samples of length 100 using with w = 0.5, c = 1.0 and random initial points. Traffic The traffic data (see FIG3 b) of Los Angeles County highway network is collected from California department of transportation http://pems.dot.ca.gov/. The prediction task is to predict the speed readings for 15 locations across LA, aggregated every 5 minutes. After upsampling and processing the data for missing values, we obtained 8, 784 sequences of length 288.Climate The climate data (see FIG3 c) is collected from the U.S. Historical Climatology Network (USHCN) (http://cdiac.ornl.gov/ftp/ushcn_daily/). The prediction task is to predict the daily maximum temperature for 15 stations. The data spans approximately 124 years. After preprocessing, we obtained 6, 954 sequences of length 366. 
Experimental Setup To validate that TT-RNNs effectively perform long-term forecasting task in, we experiment with a seq2seq architecture with TT-RNN using LSTM as recurrent cells (TLSTM). For all experiments, we used an initial sequence of length t 0 as input and varied the forecasting horizon T. We trained all models using stochastic gradient descent on the length-T sequence regression loss L(y,ŷ) = T t=1 ||ŷ t − y t || 2 2, where y t = x t+1,ŷ t are the ground truth and model prediction respectively. For more details on training and hyperparameters, see the Appendix. We compared TT-RNN against 2 set of natural baselines: 1st-order RNN (vanilla RNN, LSTM), and matrix RNNs (vanilla MRNN, MLSTM), which use matrix products of multiple hidden states without factorization BID14 ). We observed that TT-RNN with RNN cells outperforms vanilla RNN and MRNN, but using LSTM cells performs best in all experiments. We also evaluated the classic ARIMA time series model and observed that it performs ∼ 5% worse than LSTM.Long-term Accuracy For traffic, we forecast up to 18 hours ahead with 5 hours as initial inputs. For climate, we forecast up to 300 days ahead given 60 days of initial observations. For Genz dynamics, we forecast for 80 steps given 5 initial steps. All are averages over 3 runs. We now present the long-term forecasting accuracy of TLSTM in nonlinear systems. FIG4 shows the test prediction error (in RMSE) for varying forecasting horizons for different datasets. We can see that TLSTM notably outperforms all baselines on all datasets in this setting. In particular, TLSTM is more robust to long-term error propagation. We observe two salient benefits of using TT-RNNs over the unfactorized models. First, MRNN and MLSTM can suffer from overfitting as the number of weights increases. Second, on traffic, unfactorized models also show considerable instability in their long-term predictions. These suggest that tensor-train neural networks learn more stable representations that generalize better for long-term horizons. To get intuition for the learned models, we visualize the best performing TLSTM and baselines in FIG5 for the Genz function "corner-peak" and the statetransition function. We can see that TLSTM can almost perfectly recover the original function, while LSTM and MLSTM only correctly predict the mean. These baselines cannot capture the dynamics fully, often predicting an incorrect range and phase for the dynamics. In FIG6 we show predictions for the real world traffic and climate dataset. We can see that the TLSTM corresponds significantly better with ground truth in long-term forecasting. As the ground truth time series is highly chaotic and noisy, LSTM often deviates from the general trend. While both MLSTM and TLSTM can correctly learn the trend, TLSTM captures more detailed curvatures due to the inherent high-order structure. Speed Performance Trade-off We now investigate potential trade-offs between accuracy and computation. FIG7 displays the validation loss with respect to the number of steps, for the best performing models on long-term forecasting. We see that TT-RNNs converge significantly faster than other models, and achieve lower validation-loss. This suggests that TT-RNN has a more efficient representation of the nonlinear dynamics, and can learn much faster as a . Hyper-parameter Analysis The TLSTM model is equipped with a set of hyper-parameters, such as tensor-train rank and the number of lags. 
We perform a random grid search over these hyperparameters and showcase the in Table 1. In the top row, we report the prediction RMSE for the largest forecasting horizon w.r.t tensor ranks for all the datasets with lag 3. When the rank is too low, the model does not have enough capacity to capture non-linear dynamics. when the rank is too high, the model starts to overfit. In the bottom row, we report the effect of changing lags (degree of orders in Markovian dynamics). For each setting, the best r is determined by cross-validation. For different forecasting horizon, the best lag value also varies. We have also evaluated TT-RNN on long-term forecasting for chaotic dynamics, such as the Lorenz dynamics (see FIG8). Such dynamics are highly sensitive to input perturbations: two close points can move exponentially far apart under the dynamics. This makes long-term forecasting highly challenging, as small errors can lead to catastrophic longterm errors. FIG8 shows that TT-RNN can predict up to T = 40 steps into the future, but diverges quickly beyond that. We have found no state-of-the-art prediction model is stable in this setting. Classic work in time series forecasting has studied auto-regressive models, such as the ARMA or ARIMA model BID2, which model a process x(t) linearly, and so do not capture nonlinear dynamics. Our method contrasts with this by explicitly modeling higher-order dependencies. Using neural networks to model time series has a long history. More recently, they have been applied to room temperature prediction, weather forecasting, traffic prediction and other domains. We refer to BID13 for a detailed overview of the relevant literature. From a modeling perspective, BID7 ) considers a high-order RNN to simulate a deterministic finite state machine and recognize regular grammars. This work considers a second order mapping from inputs x(t) and hidden states h(t) to the next state. However, this model only considers the most recent state and is limited to two-way interactions. BID17 proposes multiplicative RNN that allow each hidden state to specify a different factorized hidden-to-hidden weight matrix. A similar approach also appears in BID14, but without the factorization. Our method can be seen as an efficient generalization of these works. Moreover, hierarchical RNNs have been used to model sequential data at multiple resolutions, e.g. to learn both short-term and long-term human behavior BID20.Tensor methods have tight connections with neural networks. For example, BID4 shows convolutional neural networks have equivalence to hierarchical tensor factorizations. BID10 BID19 employs tensor-train to compress large neural networks and reduce the number of weights. BID19 forms tensors from reshaping inputs and decomposes the input-output weights. Our model forms tensors from high-order hidden states and decomposes the hidden-output weights. BID16 propose to parameterizes the supervised learning models with matrix-product states for image classification. This work however, to the best of our knowledge, is the first work to consider tensor networks in RNNS for sequential prediction tasks for learning in environments with nonlinear dynamics. In this work, we considered forecasting under nonlinear dynamics. We propose a novel class of RNNs -TT-RNN. We provide approximation guarantees for TT-RNN and characterize its representation power. 
We demonstrate the benefits of TT-RNN to forecast accurately for significantly longer time horizon in both synthetic and real-world multivariate time series data. As we observed, chaotic dynamics still present a significant challenge to any sequential prediction model. Hence, it would be interesting to study how to learn robust models for chaotic dynamics. In other sequential prediction settings, such as natural language processing, there does not (or is not known to) exist a succinct analytical description of the data-generating process. It would be interesting to further investigate the effectiveness of TT-RNNs in such domains as well. We provide theoretical guarantees for the proposed TT-RNN model by analyzing a class of functions that satisfy some regularity condition. For such functions, tensor-train decompositions preserve weak differentiability and yield a compact representation. We combine this property with neural network estimation theory to bound the approximation error for TT-RNN with one hidden layer, in terms of: 1) the regularity of the target function f, 2) the dimension of the input space, and 3) the tensor train rank. In the context of TT-RNN, the target function f (x) with x = s ⊗... ⊗ s, is the system dynamics that describes state transitions, as in. Let us assume that f (x) is a Sobolev function: f ∈ H k µ, defined on the input space I = I 1 × I 2 × · · · I d, where each I i is a set of vectors. The space H k µ is defined as the set of functions that have bounded derivatives up to some order k and are L µ -integrable: DISPLAYFORM0 where D (i) f is the i-th weak derivative of f and µ ≥ 0. Any Sobolev function admits a Schmidt decomposition: DISPLAYFORM0, where {λ} are the eigenvalues and {γ}, {φ} are the associated eigenfunctions. Hence, we can decompose the target function f ∈ H k µ as: DISPLAYFORM1 where DISPLAYFORM2 We can truncate Eqn 13 to a low dimensional subspace (r < ∞), and obtain the functional tensor-train (FTT) approximation of the target function f: DISPLAYFORM3.FTT approximation in Eqn 13 projects the target function to a subspace with finite basis. And the approximation error can be bounded using the following Lemma: Lemma 7.1 (FTT Approximation BID1). Let f ∈ H k µ be a Hölder continuous function, defined on a bounded domain I = I 1 × · · · × I d ⊂ R d with exponent α > 1/2, the FTT approximation error can be upper bounded as DISPLAYFORM4 for r ≥ 1 and DISPLAYFORM5 Lemma 7.1 relates the approximation error to the dimension d, tensor-train rank r,and the regularity of the target function k. In practice, TT-RNN implements a polynomial expansion of the input states s, using powers [s, s ⊗2, · · ·, s ⊗p] to approximate f T T, where p is the degree of the polynomial. We can further use the classic spectral approximation theory to connect the TT-RNN structure with the degree of the polynomial, i.e., the order of the tensor. Let DISPLAYFORM6 Given a function f and its polynomial expansion P T T, the approximation error is therefore bounded by: 2 A weak derivative generalizes the derivative concept for (non)-differentiable functions and is implicitly defined as: DISPLAYFORM7 Where p is the order of tensor and r is the tensor-train rank. As the rank of the tensor-train and the polynomial order increase, the required size of the hidden units become smaller, up to a constant that depends on the regularity of the underlying dynamics f. We trained all models using the RMS-prop optimizer and employed a learning rate decay of 0.8 schedule. 
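Stepping back to the approximation argument above, a compact (hedged) restatement of the two definitions it relies on, written in standard conventions that may differ in detail from the original:

$$\|f\|_{H^k_\mu}^2 \;=\; \sum_{|i|\le k}\,\int_{I} \big|D^{(i)} f(x)\big|^2 \, d\mu(x) \;<\; \infty,$$

$$f_{TT}(x_1,\dots,x_d) \;=\; \sum_{\alpha_1=1}^{r_1}\cdots\sum_{\alpha_{d-1}=1}^{r_{d-1}} A_1^{\alpha_1}(x_1)\, A_2^{\alpha_1\alpha_2}(x_2)\cdots A_d^{\alpha_{d-1}}(x_d),$$

where the cores A_j are obtained by truncating the recursive Schmidt decomposition of f at ranks r_1, ..., r_{d−1}.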
We performed an exhaustive search over the hyper-parameters for validation. TAB3 reports the hyper-parameter search range used in this work. For all datasets, we used a 80% − 10% − 10% train-validation-test split and train for a maximum of 1e 4 steps. We compute the moving average of the validation loss and use it as an early stopping criteria. We also did not employ scheduled sampling, as we found training became highly unstable under a range of annealing schedules. Genz Genz functions are often used as basis for evaluating high-dimensional function approximation. In particular, they have been used to analyze tensor-train decompositions BID1. There are in total 7 different Genz functions. g 1 (x) = cos(2πw + cx), DISPLAYFORM0 2 π|x−w| DISPLAYFORM1 For each function, we generated a dataset with 10, 000 samples using FORMULA1 with w = 0.5 and c = 1.0 and random initial points draw from a range of [−0.1, 0.1].Traffic We use the traffic data of Los Angeles County highway network collected from California department of transportation http://pems.dot.ca.gov/. The dataset consists of 4 month speed readings aggregated every 5 minutes. Due to large number of missing values (∼ 30%) in the raw data, we impute the missing values using the average values of non-missing entries from other sensors at the same time. In total, after processing, the dataset covers 35 136, time-series. We treat each sequence as daily traffic of 288 time stamps. We up-sample the dataset every 20 minutes, which in a dataset of 8 784 sequences of daily measurements. We select 15 sensors as a joint forecasting tasks. Climate We use the daily maximum temperature data from the U.S. Historical Climatology Network (USHCN) daily (http://cdiac.ornl.gov/ftp/ushcn_daily/) contains daily measurements for 5 climate variables for approximately 124 years. The records were collected across more than 1 200 locations and span over 45 384 days. We analyze the area in California which contains 54 stations. We removed the first 10 years of day, most of which has no observations. We treat the temperature reading per year as one sequence and impute the missing observations using other non-missing entries from other stations across years. We augment the datasets by rotating the sequence every 7 days, which in a data set of 5 928 sequences. We also perform a DickeyFuller test in order to test the null hypothesis of whether a unit root is present in an autoregressive model. The test statistics of the traffic and climate data is shown in TAB5, which demonstrate the non-stationarity of the time series. Genz functions are basis functions for multi-dimensional FIG0 visualizes different Genz functions, realizations of dynamics and predictions from TLSTM and baselines. We can see for "oscillatory", "product peak" and "Gaussian ", TLSTM can better capture the complex dynamics, leading to more accurate predictions. Chaotic dynamics such as Lorenz attractor is notoriously different to lean in non-linear dynamics. In such systems, the dynamics are highly sensitive to perturbations in the input state: two close points can move exponentially far apart under the dynamics. 
We also evaluated tensor-train neural networks on long-term forecasting for Lorenz attractor and report the as follows: Lorenz The Lorenz attractor system describes a two-dimensional flow of fluids (see FIG8): dx dt = σ(y − x), dy dt = x(ρ − z) − y, dz dt = xy − βz, σ = 10, ρ = 28, β = 2.667.This system has chaotic solutions (for certain parameter values) that revolve around the so-called Lorenz attractor. We simulated 10 000 trajectories with the discretized time interval length 0.01. We sample from each trajectory every 10 units in Euclidean distance. The dynamics is generated using σ = 10 ρ = 28, β = 2.667. The initial condition of each trajectory is sampled uniformly random from the interval of [−0.1, 0.1]. FIG0 shows 45 steps ahead predictions for all models. HORNN is the full tensor TT-RNN using vanilla RNN unit without the tensor-train decomposition. We can see all the tensor models perform better than vanilla RNN or MRNN. TT-RNN shows slight improvement at the beginning state. TT-RNN shows more consistent, but imperfect, predictions, whereas the baselines are highly unstable and gives noisy predictions.
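As a concrete illustration of the data generation described above, the sketch below integrates the Lorenz system with a simple Euler discretization at the stated parameter values; the original pipeline simulates 10,000 trajectories and subsamples points every 10 units of Euclidean distance, so the integrator choice and the small number of trajectories here are illustrative.

```python
import numpy as np

def simulate_lorenz(x0, n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=2.667):
    """Euler discretization of dx/dt = sigma(y-x), dy/dt = x(rho-z)-y, dz/dt = xy - beta*z."""
    traj = np.empty((n_steps, 3))
    x, y, z = x0
    for t in range(n_steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        traj[t] = (x, y, z)
    return traj

# initial conditions sampled uniformly from [-0.1, 0.1], as described above
rng = np.random.default_rng(0)
trajectories = [simulate_lorenz(rng.uniform(-0.1, 0.1, size=3), n_steps=2000)
                for _ in range(10)]
```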
Accurate forecasting over very long time horizons using tensor-train RNNs
Recent efforts on combining deep models with probabilistic graphical models are promising in providing flexible models that are also easy to interpret. We propose a variational message-passing algorithm for variational inference in such models. We make three contributions. First, we propose structured inference networks that incorporate the structure of the graphical model in the inference network of variational auto-encoders (VAE). Second, we establish conditions under which such inference networks enable fast amortized inference similar to VAE. Finally, we derive a variational message passing algorithm to perform efficient natural-gradient inference while retaining the efficiency of the amortized inference. By simultaneously enabling structured, amortized, and natural-gradient inference for deep structured models, our method simplifies and generalizes existing methods. To analyze real-world data, machine learning relies on models that can extract useful patterns. Deep Neural Networks (DNNs) are a popular choice for this purpose because they can learn flexible representations. Another popular choice are probabilistic graphical models (PGMs) which can find interpretable structures in the data. Recent work on combining these two types of models hopes to exploit their complimentary strengths and provide powerful models that are also easy to interpret BID10 BID14 BID0 BID3.To apply such hybrid models to real-world problems, we need efficient algorithms that can extract useful structure from the data. However, the two fields of deep learning and PGMs traditionally use different types of algorithms. For deep learning, stochastic-gradient methods are the most popular choice, e.g., those based on back-propagation. These algorithms are not only widely applicable, but can also employ amortized inference to enable fast inference at test time BID17 BID12. On the other hand, most popular algorithms for PGMs exploit the model's graphical conjugacy structure to gain computational efficiency, e.g., variational message passing (VMP) BID18, expectation propagation BID16, Kalman filtering BID4 BID5, and more recently natural-gradient variational inference BID9 and stochastic variational inference BID8. In short, the two fields of deep learning and probabilistic modelling employ fundamentally different inferential strategies and a natural question is, whether we can design algorithms that combine their respective strengths. There have been several attempts to design such methods in the recent years, e.g., BID14; BID3; BID0; BID10; BID2. Our work in this paper is inspired by the previous work of BID10 that aims to combine message-passing, natural-gradient, and amortized inference. Our proposed method in this paper simplifies and generalizes the method of BID10.To do so, we propose Structured Inference Networks (SIN) that incorporate the PGM structure in the standard inference networks used in variational auto-encoders (VAE) BID12 BID17. We derive conditions under which such inference networks can enable fast amortized inference similar to VAE. By using a recent VMP method of BID11, we The generative models are just like the decoder in VAE but they employ a structured prior, e.g., Fig. (a) has a mixture-model prior while Fig. (b) has a dynamical system prior. SINs, just like the encoder in VAE, mimic the structure of the generative model by using parameters φ. 
One main difference is that in SIN the arrows between y n and x n are reversed compared to the model, while rest of the arrows have the same direction.derive a variational message-passing algorithm whose messages automatically reduce to stochasticgradients for the deep components of the model, while perform natural-gradient updates for the PGM part. Overall, our algorithm enables Structured, Amortized, and Natural-gradient (SAN) updates and therefore we call our algorithm the SAN algorithm. We show that our algorithm give comparable performance to the method of BID10 while simplifying and generalizing it. The code to reproduce our is available at https://github.com/emtiyaz/vmp-for-svae/. We consider the modelling of data vectors y n by using local latent vectors x n. Following previous works BID10 BID0 BID14, we model the output y n given x n using a neural network with parameters θ NN, and capture the correlations among data vectors y:= {y 1, y 2, . . ., y N} using a probabilistic graphical model (PGM) over the latent vectors x:= {x 1, x 2, . . ., x N}. Specifically, we use the following joint distribution: DISPLAYFORM0 where θ NN and θ PGM are parameters of a DNN and PGM respectively, and θ:= {θ NN, θ PGM}.This combination of probabilistic graphical model and neural network is referred to as structured variational auto-encoder (SVAE) by BID10. SVAE employs a structured prior p(x|θ PGM) to extract useful structure from the data. SVAE therefore differs from VAE BID12 where the prior distribution over x is simply a multivariate Gaussian distribution p(x) = N (x|0, I) with no special structure. To illustrate this difference, we now give an example. Example (Mixture-Model Prior): Suppose we wish to group the outputs y n into K distinct clusters. For such a task, the standard Gaussian prior used in VAE is not a useful prior. We could instead use a mixture-model prior over x n, as suggested by BID10, DISPLAYFORM1 where z n ∈ {1, 2, . . ., K} is the mixture indicator for the n'th data example, and π k are mixing proportions that sum to 1 over k. Each mixture component can further be modelled, e.g., by using a Gaussian distribution p(x n |z n = k):= N (x n |µ k, Σ k) giving us the Gaussian Mixture Model (GMM) prior with PGM hyperparameters DISPLAYFORM2. The graphical model of an SVAE with such priors is shown in FIG0. This type of structured-prior is useful for discovering clusters in the data, making them easier to interpret than VAE.Our main goal in this paper is to approximate the posterior distribution p(x, θ|y). Specifically, similar to VAE, we would like to approximate the posterior of x by using an inference network. In VAE, this is done by using a function parameterized by DNN, as shown below: DISPLAYFORM3 where the left hand side is the posterior distribution of x, and the first equality is obtained by using the distribution of the decoder in the Bayes' rule. The right hand side is the distribution of the encoder where q is typically an exponential-family distribution whose natural-parameters are modelled by using a DNN f φ with parameters φ. The same function f φ (·) is used for all n which reduces the number of variational parameters and enables sharing of statistical strengths across n. This leads to both faster training and faster testing BID17.Unfortunately, for SVAE, such inference networks may give inaccurate predictions since they ignore the structure of the PGM prior p(x|θ PGM). 
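For concreteness, the per-example encoder being criticized here can be sketched as a single network mapping each y_n to the parameters of a Gaussian factor, independently of all other data points; the one-hidden-layer architecture is an illustrative assumption.

```python
import numpy as np

def vae_encoder(y_n, W1, b1, W_mu, b_mu, W_logvar, b_logvar):
    """Amortized per-datapoint inference: q(x_n | y_n) = N(mu_n, diag(var_n)).

    The same weights are shared across all n, but nothing ties x_n to the other
    latent variables, which is why structure in the PGM prior is lost.
    """
    h = np.tanh(W1 @ y_n + b1)
    mu_n = W_mu @ h + b_mu
    var_n = np.exp(W_logvar @ h + b_logvar)
    return mu_n, var_n
```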
For example, suppose y n is a time-series and we model x n using a dynamical system as depicted in FIG0. In this case, the inference network of FORMULA3 is not an accurate approximation since it ignores the time-series structure in x. This might in inaccurate predictions of distant future observations, e.g., prediction for an observation y 10 given the past data {y 1, y 2, y 3} would be inaccurate because the inference network has no path connecting x 10 to x 1, x 2, or x 3. In general, whenever the prior structure is important in obtaining accurate predictions, we might want to incorporate it in the inference network. A solution to this problem is to use an inference network with the same structure as the model but to replace all its edges by neural networks BID14 BID3 ). This solution is reasonable when the PGM itself is complex, but might be too aggressive when the PGM is a simple model, e.g., when the prior in FIG0 is a linear dynamical system. Using DNNs in such cases would dramatically increase the number of parameters which will lead to a possible deterioration in both speed and performance. BID10 propose a method to incorporate the structure of the PGM part in the inference network. For SVAE with conditionally-conjugate PGM priors, they aim to obtain a mean-field variational inference by optimizing the following standard variational lower bound 1: DISPLAYFORM4 where q(x|λ x) is a minimal exponential-family distribution with natural parameters λ x. To incorporate an inference network, they need to restrict the parameter of q(x|λ x) similar to the VAE encoder shown in, i.e., λ x must be defined using a DNN with parameter φ. For this purpose, they use a two-stage iterative procedure. In the first stage, they obtain λ * x by optimizing a surrogate lower bound where the decoder in is replaced by the VAE encoder of (highlighted in blue), DISPLAYFORM5 The optimal λ * x is a function of θ and φ and they denote it by λ * x (θ, φ). In the second stage, they substitute λ * x into and take a gradient step to optimize L(λ * x (θ, φ), θ) with respect to θ and φ. This is iterated until convergence. The first stage ensures that q(x|λ x) is defined in terms of φ similar to VAE, while the second stage improves the lower bound while maintaining this restriction. The advantage of this formulation is that when the factors q(x n |f φ (y n)) are chosen to be conjugate to p(x|θ PGM), the first stage can be performed efficiently using VMP. However, the overall method might be difficult to implement and tune. This is because the procedure is equivalent to an implicitly-constrained optimization 2 that optimizes with the constraint λ * x (θ, φ) = arg max λxL (λ x, θ, φ). Such constrained problems are typically more difficult to solve than their unconstrained counterparts, especially when the constraints are nonconvex BID6. Theoretically, the convergence of such methods is difficult to guarantee when the constraints are violated. In practice, this makes the implementation difficult because in every iteration the VMP updates need to run long enough to reach close to a local optimum of the surrogate lower bound. Another disadvantage of the method of BID10 is that its efficiency could be ensured only under restrictive assumptions on the PGM prior. For example, the method does not work for PGMs that contain non-conjugate factors because in that case VMP cannot be used to optimize the surrogate lower bound. 
In addition, the method is not directly applicable when λ x is constrained and when p(x|θ PGM) has additional latent variables (e.g., indicator variables z n in the mixture-model example). In summary, the method of BID10 might be difficult to implement and tune, and also difficult to generalize to cases when PGM is complex. In this paper, we propose an algorithm to simplify and generalize the algorithm of BID10. We propose structured inference networks (SIN) that incorporate the structure of the PGM part in the VAE inference network. Even when the graphical model contains a non-conjugate factor, SIN can preserve some structure of the model. We derive conditions under which SIN can enable efficient amortized inference by using stochastic gradients. We discuss many examples to illustrate the design of SIN for many types of PGM structures. Finally, we derive a VMP algorithm to perform natural-gradient variational inference on the PGM part while retaining the efficiency of the amortized inference on the DNN part. We start with the design of inference networks that incorporate the PGM structure into the inference network of VAE. We propose the following structured inference network (SIN) which consists of two types of factors, DISPLAYFORM0 The DNN factor here is similar to while the PGM factor is an exponential-family distribution which has a similar graph structure as the PGM prior p(x|θ PGM). The role of the DNN term is to enable flexibility while the role of the PGM term is to incorporate the model's PGM structure into the inference network. Both factors have their own parameters. φ NN is the parameter of DNN and φ PGM is the natural parameter of the PGM factor. The parameter set is denoted by φ:= {φ NN, φ PGM}.How should we choose the two factors? As we will show soon that, for fast amortized inference, these factors need to satisfy the following two conditions. The first condition is that the normalizing constant 3 log Z(φ) is easy to evaluate and differentiate. The second condition is that we can draw samples from SIN, i.e., x * (φ) ∼ q(x|y, φ) where we have denoted the sample by x * (φ) to show its dependence on φ. An additional desirable, although not necessary, feature is to be able to compute the gradient of x * (φ) by using the reparameterization trick. Now, we will show that given these two conditions we can easily perform amortized inference. We show that when the above two conditions are met, a stochastic gradient of the lower bound can be computed in a similar way as in VAE. For now, we assume that θ is a deterministic variable (we will relax this in the next section). The variational lower bound in this case can be written as follows: DISPLAYFORM1 The first term above is identical to the lower bound of the standard VAE, while the rest of the terms are different (shown in blue). The second term differs due to the PGM prior in the generative model. In VAE, p(x|θ PGM) is a standard normal, but here it is a structured PGM prior. The last two terms arise due to the PGM term in SIN. If we can compute the gradients of the last three terms and generate samples x * (φ) from SIN, we can perform amortized inference similar to VAE. Fortunately, the second and third terms are usually easy for PGMs, therefore we only require the gradient of Z(φ) to be easy to compute. This confirms the two conditions required for a fast amortized inference. 
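For reference, substituting the stated form of q(x|y, φ) into the standard lower bound gives the following expansion; the grouping of terms in the original presentation may differ.

$$\mathcal{L}_{\text{SIN}}(\theta,\phi)
= \mathbb{E}_{q}\big[\log p(y\,|\,x,\theta_{\text{NN}})\big]
+ \mathbb{E}_{q}\big[\log p(x\,|\,\theta_{\text{PGM}})\big]
- \mathbb{E}_{q}\big[\log q\big(x\,|\,f_{\phi_{\text{NN}}}(y)\big)\big]
- \mathbb{E}_{q}\big[\log q(x\,|\,\phi_{\text{PGM}})\big]
+ \log Z(\phi).$$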
The ing expressions for the stochastic gradients are shown below where we highlight in blue the additional gradient computations required on top of a VAE implementation (we also drop the explicit dependence of x * (φ) over φ for notational simplicity). DISPLAYFORM2 DISPLAYFORM3 The gradients of Z(φ) and x * (φ) might be cheap or costly depending on the type of PGM. For example, for LDS, these require a full inference through the model which costs O(N) computation and is infeasible for large N. However, for GMM, each x n can be independently sampled and therefore computations are independent of N. In general, if the latent variables in PGM are highly correlated (e.g., Gaussian process prior), then Bayesian inference is not computationally efficient and gradients are difficult to compute. In this paper, we do not consider such difficult cases and assume that Z(φ) and x * (φ) can be evaluated and differentiated cheaply. We now give many examples of SIN that meet the two conditions required for a fast amortized inference. When p(x|θ PGM) is a conjugate exponential-family distribution, choosing the two factors is a very easy task. In this case, we can let q(x|φ PGM) = p(x|φ PGM), i.e., the second factor is the same distribution as the PGM prior but with a different set of parameters φ PGM. To illustrate this, we give an example below when the PGM prior is a linear dynamical system. Example (SIN for Linear Dynamical System (LDS)): When y n is a time series, we can model the latent x n using an LDS defined as p(x|θ):= N (x 0 |µ 0, Σ 0) N n=1 N (x n |Ax n−1, Q), where A is the transition matrix, Q is the process-noise covariance, and µ 0 and Σ 0 are the mean and covariance of the initial distribution. Therefore, θ PGM:= {A, Q, µ 0, Σ 0}. In our inference network, we choose q(x|φ PGM) = p(x|φ PGM) as show below, where φ PGM:= {Ā,Q,μ 0,Σ 0} and, since our PGM is a Gaussian, we choose the DNN factor to be a Gaussian as well: DISPLAYFORM4 where m n:= m φNN (y n) and V n:= V φNN (y n) are mean and covariance parameterized by a DNN with parameter φ NN. The generative model and SIN are shown in FIG0, respectively. The above SIN is a conjugate model where the marginal likelihood and distributions can be computed in O(N) using the forward-backward algorithm, a.k.a. Kalman smoother BID1. We can also compute the gradient of Z(φ) as shown in BID13.When the PGM prior has additional latent variables, e.g., the GMM prior has cluster indicators z n, we might want to incorporate their structure in SIN. This is illustrate in the example below. Example (SIN for GMM prior): The prior shown in has an additional set of latent variables z n. To mimic this structure in SIN, we choose the PGM factor as shown below with parameters DISPLAYFORM5, while keeping the DNN part to be a Gaussian distribution similar to the LDS case: DISPLAYFORM6 The model and SIN are shown in FIG0 and 1b, respectively. Fortunately, due to conjugacy of the Gaussian and multinomial distributions, we can marginalize x n to get a closed-form expression for log Z(φ):= n log k N (m n |μ k, V n +Σ k)π k. We can sample from SIN by first sampling from the marginal q(z n = k|y, φ) ∝ N m n |μ k, V n +Σ k π k. Given z n, we can sample x n from the following conditional: DISPLAYFORM7 In all of the above examples, we are able to satisfy the two conditions even when we use the same structure as the model. However, this may not always be possible for all conditionally-conjugate exponential family distributions. 
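Returning to the GMM example above, the closed-form normalizer and the two-stage sampler can be sketched as follows; the dense parameterization and the use of the standard Gaussian-product formula for the conditional over x_n are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def gmm_sin_log_Z(m, V, mu_bar, Sigma_bar, pi_bar):
    """log Z(phi) = sum_n log sum_k N(m_n | mu_bar_k, V_n + Sigma_bar_k) pi_bar_k."""
    log_Z = 0.0
    for m_n, V_n in zip(m, V):
        log_Z += np.log(sum(pi_bar[k] * mvn.pdf(m_n, mu_bar[k], V_n + Sigma_bar[k])
                            for k in range(len(pi_bar))))
    return log_Z

def gmm_sin_sample(m_n, V_n, mu_bar, Sigma_bar, pi_bar, rng):
    """Sample z_n from its marginal, then x_n from the Gaussian conditional."""
    w = np.array([pi_bar[k] * mvn.pdf(m_n, mu_bar[k], V_n + Sigma_bar[k])
                  for k in range(len(pi_bar))])
    z_n = rng.choice(len(pi_bar), p=w / w.sum())
    # product of the two Gaussian factors N(x | m_n, V_n) N(x | mu_bar_z, Sigma_bar_z)
    prec = np.linalg.inv(V_n) + np.linalg.inv(Sigma_bar[z_n])
    cov = np.linalg.inv(prec)
    mean = cov @ (np.linalg.inv(V_n) @ m_n + np.linalg.inv(Sigma_bar[z_n]) @ mu_bar[z_n])
    return z_n, rng.multivariate_normal(mean, cov)
```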
However, we can still obtain samples from a tractable structured mean-field approximation using VMP. We illustrate this for the switching state-space model in Appendix A. In such cases, a drawback of our method is that we need to run VMP long enough to get a sample, very similar to the method of BID10. However, our gradients are simpler to compute than theirs. Their method requires gradients of λ * (θ, φ) which depends both on θ and φ (see Proposition 4.2 in BID10). In our case, we require gradient of Z(φ) which is independent of θ and therefore is simpler to implement. An advantage of our method over the method of BID10 is that our method can handle non-conjugate factors in the generative model. When the PGM prior contains some non-conjugate factors, we might replace them by their closest conjugate approximations while making sure that the inference network captures the useful structure present in the posterior distribution. We illustrate this on a Student's t mixture model. To handle outliers in the data, we might want to use the Student's t-mixture component in the mixture model shown in, i.e., we set p(x n |z n = k) = T (x n |µ k, Σ k, γ k) with mean µ k, scale matrix Σ k and degree of freedom γ k. The Student's t-distribution is not conjugate to the multinomial distribution, therefore, if we use it as the PGM factor in SIN, we will not be able to satisfy both conditions easily. Even though our model contains a t-distribution components, we can still use the SIN shown in that uses a GMM factor. We can therefore simplify inference by choosing an inference network which has a simpler form than the original model. In theory, one can do this even when all factors are non-conjugate, however, the approximation error might be quite large in some cases for this approximation to be useful. In our experiments, we tried this for non-linear dynamical system and found that capturing non-linearity was essential for dynamical systems that are extremely non-linear. Previously, we assumed θ PGM to be deterministic. In this section, we relax this condition and assume θ PGM to follow an exponential-family prior p(θ PGM |η PGM) with natural parameter η PGM. We derive a VMP algorithm to perform natural-gradient variational inference for θ PGM. Our algorithm works even when the PGM part contains non-conjugate factors, and it does not affect the efficiency of the amortized inference on the DNN part. We assume the following mean-field approximation: q(x, θ|y):= q(x|y, φ)q(θ PGM |λ PGM) where the first term is equal to SIN introduced in the previous section, and the second term is an exponential-family distribution with natural parameter λ PGM. For θ NN and φ, we will compute point estimates. We build upon the method of BID11 which is a generalization of VMP and stochastic variational inference (SVI). This method enables natural-gradient updates even when PGM contains non-conjugate factors. This method performs natural-gradient variational inference by using a mirror-descent update with the Kullback-Leibler (KL) divergence. To obtain natural-gradients with respect to the natural parameters of q, the mirror-descent needs to be performed in the mean parameter space. We will now derive a VMP algorithm using this method. We start by deriving the variational lower bound. The variational lower bound corresponding to the mean-field approximation can be expressed in terms of L SIN derived in the previous section. Compute q(x|y, φ) for SIN shown in either by using an exact expression or using VMP. 
Sample x* ∼ q(x|y, φ), and compute ∇_φ Z and ∇_φ x*. Update λ_PGM using the natural-gradient step given below. Update θ_NN and φ using the stochastic gradients of the lower bound with θ_PGM ∼ q(θ_PGM|λ_PGM). Repeat these steps until convergence.

We will use a mirror-descent update with the KL divergence for q(θ_PGM|λ_PGM) because we want natural-gradient updates for it. For the rest of the parameters, we will use the usual Euclidean distance. We denote the mean parameter corresponding to λ_PGM by µ_PGM. Since q is a minimal exponential family, there is a one-to-one map between the mean and natural parameters, therefore we can reparameterize q such that q(θ_PGM|λ_PGM) = q(θ_PGM|µ_PGM). Denoting the values at iteration t with a superscript t and using Eq. 19 in BID11 with these divergences, we get

µ_PGM^{t+1} = arg max_µ ⟨µ, ∇_µ L^t⟩ − (1/β_1) KL[q(θ_PGM|µ) || q(θ_PGM|µ^t)],
φ^{t+1} = arg max_φ ⟨φ, ∇_φ L^t⟩ − (1/(2β_2)) ||φ − φ^t||²,   θ_NN^{t+1} = arg max_θ ⟨θ, ∇_θ L^t⟩ − (1/(2β_3)) ||θ − θ_NN^t||²,

where β_1 to β_3 are scalars, ⟨·,·⟩ is an inner product, and ∇L^t is the gradient at the value in iteration t. As shown by BID11, the maximization over µ can be obtained in closed form:

λ_PGM^{t+1} = λ_PGM^t + β_1 ∇_µ L^t.

When the prior p(θ_PGM|η_PGM) is conjugate to p(x|θ_PGM), the above step is equal to the SVI update of the global variables. The gradient itself is equal to the message received by θ_PGM in a VMP algorithm, which is also the natural gradient with respect to λ_PGM. When the prior is not conjugate, the gradient can be approximated either by using stochastic gradients or by using the reparameterization trick BID11. Therefore, this update enables natural-gradient updates for PGMs that may contain both conjugate and non-conjugate factors. The update of the rest of the parameters can be done by using a stochastic-gradient method, because the solution of the Euclidean-distance update above is equal to a stochastic-gradient-descent update (one can verify this by simply taking the gradient and setting it to zero). We can compute the stochastic gradients by using a Monte Carlo estimate with a sample θ* := {θ*_PGM, θ_NN}, where θ*_PGM ∼ q(θ_PGM|λ_PGM). As discussed in the previous section, these gradients can be computed in a VAE-like manner using the gradients given earlier. Therefore, for the DNN part we can perform amortized inference, and use a natural-gradient update for the PGM part using VMP.

The final algorithm is outlined in Algorithm 1. Since our algorithm enables Structured, Amortized, and Natural-gradient (SAN) updates, we call it the SAN algorithm. Our updates conveniently separate the PGM and DNN computations. Steps 3-6 operate on the PGM part, for which we can use existing implementations for the PGM. Step 7 operates on the DNN part, for which we can reuse a VAE implementation. Our algorithm not only generalizes previous works, but also simplifies the implementation by enabling the reuse of existing software.

The main goal of our experiments is to show that our SAN algorithm gives results similar to the method of BID10. For this reason, we apply our algorithm to the two examples considered in BID10, namely the latent GMM and latent LDS (see FIG0). In this section we discuss results for latent GMM. An additional result for LDS is included in Appendix C. Our results show that, similar to the method of BID10, our algorithm can learn complex representations with interpretable structures. The advantage of our method is that it is simpler and more general than the method of BID10.

(Figure caption fragment: Even with 70% outliers, SAN-TMM performs better than SAN-GMM with 10% outliers. Figure 3 caption: Top row is for the Pinwheel dataset, while the bottom row is for the Auto dataset. Point clouds in the background of each plot show the samples generated from the learned generative model, where each mixture component is shown with a different color and the color intensities are proportional to the probability of the mixture component. The points in the foreground show data samples which are colored according to the true labels. We use K = 10 mixture components to train all models. For the Auto dataset, we show only the first two principal components.)

We compare to three baseline methods. The first method is the variational expectation-maximization (EM) algorithm applied to the standard Gaussian mixture model. We refer to this method as 'GMM'. This method is a clustering method but does not use a DNN to do so. The second method is the VAE approach of BID12, which we refer to as 'VAE'. This method uses a DNN but does not cluster the outputs or latent variables. The third method is the SVAE approach of BID10 applied to the latent GMM shown in FIG0. This method uses both a DNN and a mixture model to cluster the latent variables. We refer to this as 'SVAE'. We compare these methods to our SAN algorithm applied to the latent GMM model. We refer to our method as 'SAN'. All methods employ a Normal-Wishart prior over the GMM hyperparameters (see BID1 for details).

We use two datasets. The first dataset is the synthetic two-dimensional Pinwheel dataset (N = 5000 and D = 2) used in BID10. The second dataset is the Auto dataset (N = 392 and D = 6, available in the UCI repository), which contains information about cars. The dataset also contains a five-class label which indicates the number of cylinders in a car. We use these labels to validate our results. For both datasets we use 70% of the data for training and the rest for testing. For all methods, we tune the step-sizes, the number of mixture components, and the latent dimensionality on a validation set. We train the GMM baseline using a batch method, and, for VAE and SVAE, we use minibatches of size 64. DNNs in all models consist of two layers with 50 hidden units and an output layer of dimensionality 6 and 2 for the Auto and Pinwheel datasets, respectively.

Figures 2a and 2b compare the performance during training. In Figure 2a, we compare to SVAE and GMM, where we see that SAN converges faster than SVAE. As expected, both SVAE and SAN achieve similar performance upon convergence and perform better than GMM. In Figure 2b, we compare to VAE and GMM, and observe similar trends. The performance of GMM is represented as a constant because it converges after a few iterations already. We found that the implementation provided by BID10 does not perform well on the Auto dataset, which is why we have not included it in the comparison. We also compared the test log-likelihoods and imputation error, which show very similar trends; we omit these due to space constraints. In the background of each plot in Figure 3, we show samples generated from the generative model. In the foreground, we show the data with the true labels. These labels were not used during training. The top-row plots show results for the Pinwheel dataset, while the bottom-row plots show results for the Auto dataset. For the Auto dataset, each label corresponds to the number of cylinders present in a car. We observe that SAN can learn meaningful clusters of the outputs.
On the other hand, VAE does not have any mechanisms to cluster and, even though the generated samples match the data distribution, the are difficult to interpret. Finally, as expected, both SAN and VAE learn flexible patterns while GMM fails to do so. Therefore, SAN enables flexible models that are also easy to interpret. An advantage of our method over the method of BID10 is that our method applies even when PGM contains non-conjugate factors. Now, we discuss a for such a case. We consider the SIN for latent Student's t-mixture model (TMM) discussed in Section 3. The generative model contains the student's t-distribution as a non-conjugate factor, but our SIN replaces it with a Gaussian factor. When the data contains outliers, we expect the SIN for latent TMM to perform better than the SIN for latent GMM. To show this, we add artificial outliers to the Pinwheel dataset using a Gaussian distribution with a large variance. We fix the degree of freedom for the Student's t-distribution to 5. We test on four different levels of noise and report the test MSE averaged over three runs for each level. FIG1 shows a comparison of GMM, SAN on latent GMM, and SAN on latent TMM where we see that, as the noise level is increased, latent TMM's performance degrades slower than the other methods (note that the y-axis is in log-scale). Even with 70% of outliers, the latent TMM still performs better than the latent GMM with only 10% of outliers. This experiment illustrates that a conjugate SIN can be used for inference on a model with a non-conjugate factor. We propose an algorithm to simplify and generalize the algorithm of BID10 for models that contain both deep networks and graphical models. Our proposed VMP algorithm enables structured, amortized, and natural-gradient updates given that the structured inference networks satisfy two conditions. The two conditions derived in this paper generally hold for PGMs that do not force dense correlations in the latent variables x. However, it is not clear how to extend our method to models where this is the case, e.g., Gaussian process models. It is possible to use ideas from sparse Gaussian process models and we will investigate this in the future. An additional issue is that our are limited to small scale data. We found that it is non-trivial to implement a message-passing framework that goes well with the deep learning framework. We are going to pursue this direction in the future and investigate good platforms to integrate the capabilities of these two different flavors of algorithms. In SLDS, we introduce discrete variable z n ∈ {1, 2, . . ., K} that are sampled using a Markov chain: p(z n = i|z n−1 = j) = π ij such that π ij sum to 1 over all i given j. The transition for LDS is defined conditioned on z n: p(x n |x n−1, z n = i, θ PGM):= N (x n |A i x n−1, Q i) where A i and Q i are parameters for the i'th indicator. These two dynamics put together define the SLDS prior p(x, z|θ PGM). We can use the following SIN which uses the SLDS prior as the PGM factor but with parameters φ PGM instead of θ PGM. The expression for q(x, z|y, φ) is shown below: DISPLAYFORM0 Even though the above model is a conditionally-conjugate model, the partition function is not tractable and sampling is also not possible. However, we can use a structured mean-field approximation. First, we can combine the DNN factor with the Gaussian observation of SLDS factor and then use a mean-field approximation q(x, z|y, φ) ≈ q(x|λ x)q(z|λ z), e.g., using the method of BID5. 
This will give us a structured approximation where the edges between y_n and x_n and between z_n and z_{n−1} are maintained, but x_n and z_n are independent of each other. In this section we give detailed derivations for the GMM-structured SIN. We derive the normalizing constant Z(φ) and show how to generate samples from SIN. We start with a simple rearrangement of the SIN definition:

q(x|y, φ) ∝ Π_n N(x_n|m_n, V_n) Σ_k π̄_k N(x_n|μ̄_k, Σ̄_k) = Π_n Σ_k π̄_k N(x_n|m_n, V_n) N(x_n|μ̄_k, Σ̄_k) = Π_n Σ_k q(x_n, z_n = k|y_n, φ),

where the first step follows from the definition, the second step follows by taking the sum over k outside, and the third step is obtained by defining each component as a joint distribution over x_n and the indicator variable z_n. We will express this joint distribution as a multiplication of the marginal of z_n and the conditional of x_n given z_n. We will see that this gives us the expression for the normalizing constant, as well as a way to sample from SIN. We can simplify the joint distribution further as shown below. The first step follows from the definition. The second step is obtained by swapping m_n and x_n in the first term. The third step is obtained by completing the square and expressing the first term as a distribution over x_n (the second and third terms are independent of x_n):

q(x_n, z_n = k|y_n, φ) ∝ N(x_n|m_n, V_n) N(x_n|μ̄_k, Σ̄_k) π̄_k = N(m_n|x_n, V_n) N(x_n|μ̄_k, Σ̄_k) π̄_k = N(x_n|µ_n, Σ_n) N(m_n|μ̄_k, V_n + Σ̄_k) π̄_k,

where Σ_n := (V_n^{-1} + Σ̄_k^{-1})^{-1} and µ_n := Σ_n (V_n^{-1} m_n + Σ̄_k^{-1} μ̄_k). Using the above, we get the marginal of z_n and the conditional of x_n given z_n:

q(z_n = k|y_n, φ) ∝ N(m_n|μ̄_k, V_n + Σ̄_k) π̄_k,   q(x_n|z_n = k, y_n, φ) := N(x_n|µ_n, Σ_n).

The normalizing constant of the marginal of z_n is obtained by simply summing over all k, Z_n(φ) = Σ_k N(m_n|μ̄_k, V_n + Σ̄_k) π̄_k, and since q(x_n|z_n = k, y_n, φ) is already a normalized distribution, we can write the final expression for the SIN as follows:

q(x_n, z_n = k|y_n, φ) = q(x_n|z_n = k, y_n, φ) q(z_n = k|y_n, φ),

where the components are defined above. The normalizing constant is available in closed form, and we can sample z_n first and then generate x_n. This completes the derivation.

In this experiment, we apply our SAN algorithm to the latent LDS discussed in Section 3. For comparison, we compare our method, Structured Variational Auto-Encoder (SVAE) BID10, and LDS on the Dot dataset used in BID10. Our results show that our method achieves comparable performance to SVAE. For LDS, we perform batch learning for all model parameters using the EM algorithm. For SVAE and SAN, we perform mini-batch updates for all model parameters. We use the same neural network architecture as in BID10, which contains two hidden layers with the tanh activation function. We repeat our experiments 10 times and measure model performance in terms of the following mean absolute error for τ-steps-ahead prediction, which measures the absolute difference between the ground truth and the generative outputs, averaged across the generated results:

MAE(τ) = (1/(N T d)) Σ_{n=1}^N Σ_{t=1}^T ||y*_{t+τ,n} − ŷ_{t+τ,n}||_1,

where N is the number of testing time series with T time steps, d is the dimensionality of the observation y, ŷ_{t+τ,n} is the generated prediction, and y*_{t+τ,n} denotes the ground truth at time step t + τ. From FIG4, we can observe that our method performs as well as SVAE and outperforms LDS. Our method is also slightly more robust than SVAE. FIG5 shows generated images obtained from all methods. From FIG5, we also see that our method performs as well as SVAE and is able to recover the ground-truth observation.
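A minimal sketch of the τ-steps-ahead error above; the exact normalization is not recoverable from the text, so we assume averaging over the N series, T time steps, and d dimensions, and the array shapes are our convention.

```python
import numpy as np

def mae_tau_steps_ahead(y_true, y_pred, tau):
    """Mean absolute error for tau-steps-ahead prediction (illustrative reconstruction).

    y_true : array of shape (N, T + tau, d) -- ground-truth observations
    y_pred : array of shape (N, T, d)       -- model predictions of y_{t+tau}
    Averages the absolute difference over series, time steps, and dimensions."""
    N, T, d = y_pred.shape
    target = y_true[:, tau:tau + T, :]        # y*_{t+tau, n}
    return np.abs(target - y_pred).sum() / (N * T * d)

# Toy usage with random data.
rng = np.random.default_rng(0)
y_true = rng.standard_normal((5, 30, 2))
y_pred = rng.standard_normal((5, 25, 2))
print(mae_tau_steps_ahead(y_true, y_pred, tau=5))
```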
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HyH9lbZAW
We propose a variational message-passing algorithm for models that contain both the deep model and probabilistic graphical model.
Modern deep neural networks have a large amount of weights, which make them difficult to deploy on computation constrained devices such as mobile phones. One common approach to reduce the model size and computational cost is to use low-rank factorization to approximate a weight matrix. However, performing standard low-rank factorization with a small rank can hurt the model expressiveness and significantly decrease the performance. In this work, we propose to use a mixture of multiple low-rank factorizations to model a large weight matrix, and the mixture coefficients are computed dynamically depending on its input. We demonstrate the effectiveness of the proposed approach on both language modeling and image classification tasks. Experiments show that our method not only improves the computation efficiency but also maintains (sometimes outperforms) its accuracy compared with the full-rank counterparts. Modern neural networks usually contain millions of parameters BID4 BID8, and they are difficult to be deployed on mobile devices with limited computation resources. To solve this problem, model compression techniques are proposed in recent years. Low-rank factorization is a popular way of reducing the matrix size. It has been extensively explored in the literature BID5 BID6 BID3 BID10. Mathematically, a large weight matrix W ∈ R m×n is factorized to two small rank-d matrices U ∈ R m×d, V ∈ R n×d with W = U V T. Since both U and V are dense, no sparsity support is required from specialized hardware. It naturally fits the general-purpose, off-the-shelf CPUs and GPUs. To significantly reduce the model size and computation, the rank d in the low-rank factorization needs to be small. However, a small rank can limit the expressiveness of the model BID9 and lead to worse performance. To understand the limitations, given a n-dim feature vector h, we observe that DISPLAYFORM0, is a linear projection from a high-dimensional space (n dims) to a low-dimensional space (d dims). This can lead to a significant loss of information. The conflict between the rank d and the model expressiveness prevents us from obtaining a both compact and accurate model. To address the dilemma, we propose to increase the expressiveness by learning an adaptive, inputdependent factorization, rather than performing a fixed factorization of a weight matrix. To do so, we use a mixture of multiple low-rank factorizations. The mixing weights are computed based on the input. This creates an adaptive linear projection from a high-dimensional space to a low-dimensional space. Compared to the conventional low-rank factorization, the proposed approach can significantly improve its performance while only introducing a small additional cost. DISPLAYFORM1 where z can be treated as the middle layer. Techniques like pooling can be applied to compute π to make it efficient. We propose to use an unnormalized learned mixture of low-rank factorizations whose mixing weights are computed adaptively based on the input. More specifically, denoting the input by h and the number of mixture components by K, we decompose a large weight matrix by DISPLAYFORM0 where π(·): R n → R K is the function which maps each input to its mixture coefficients. For example, π can be a small neural network. This introduces a small amount of extra parameters and computation. We will later discuss the details of efficient ways to implement the mixture function π. If π k, k = 1,..., K, is chosen to be constant (input independent), it can be absorbed into either DISPLAYFORM1. 
Thus, the proposed method reduces to the low-rank factorization. This is evidenced by rewriting DISPLAYFORM2. In other words, the conventional low-rank factorization can be considered as a special case of our method. FIG0 depicts the proposed framework. Adaptive mixing weights π(h). The mixing weights can encode important information that we can use to increase the expressiveness of the projected low-dimensional space. Under our framework, the generation of the mixing weights π(h) is flexible. A straight-forward approach is to use a non-linear transformation of the input to the weight matrix. For example, π(h) = σ(P h), where σ is a non-linear transformation, such as sigmoid or hyperbolic tangent function, and P ∈ R K×n is an extra weight matrix. This adds some extra parameters and computation to the model since the linear projection that we construct is R n → R K. To further reduce the parameter and computation in the mixing weights π, we propose the following strategies. Pooling before projection. We do not require the whole input to compute the mixture function π. Instead, we can apply pooling to the input h before projection. For example, a global average pooling can be applied if the input is a 3D tensor (for images); for a 1D vector, we can segment the vector and average each segmentations. By applying pooling, we can both save the computation and better capture the global information. Random projection. To reduce the number of parameters in the linear projection of h, we can use a random matrix P random in place of a fully adjustable P, i.e. π(h) = σ(P random h). Note that we can simply save a seed to recover the random matrix, but it still requires the same amount of memory and computation as the fully adjustable linear projection of h. Increased expressiveness. The adaptive mixing weights introduce a non-linear transformation into the high-to-low-dimensional projection that can be more expressive. Since each W (h) is a data-dependent low-rank matrix, there is no constant linear weight independent to the input (even a full-rank matrix) that can mimic the transformation W (h). Generating the whole weight matrices can be very expensive. Our method can be seen as a swift approach to generate the weights by adaptively adjusting mixing weights for the linear bottleneck. It assigns weights into groups and dynamically controls them at the group level. Recurrent neural networks for language modeling. Recurrent neural networks (RNNs) are widely used in language modeling, machine translation and sequence modeling in general. We adopt the same Long Short Term Memory (LSTM) models and follow the settings from a previous state-ofthe-art model BID11 for language modeling, and use Penn Tree Bank (PTB) as well as Text8 datasets. More specifically, we use the medium sized model introduced in BID11.We test three variants of the proposed model against regular low-rank factorization, each with different ways of computing mixing weights, namely MIX-ALL-PJ: direct linear projection of the input vector h, MIX-POOL-PJ: linear projection after segment-based mean pooling of the input vector h, and MIX-RND-PJ: use a random projection for the input vector h. Among these adaptive projection methods, MIX-ALL-PJ has a large amount of extra parameters, MIX-POOL-PJ has a small amount of extra parameters, and MIX-RND-PJ has no extra parameters. We compute the FLOPs of a single time-step of applying LSTM, and the perplexity associated to different settings. The are shown in FIG1. 
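Before turning to these results, a minimal numpy sketch of the adaptive mixture layer with mean-pooling before the π projection may help; the shapes, the sigmoid choice, and all helper names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def mixture_low_rank_forward(h, Us, Vs, P, segments):
    """Adaptive mixture of K low-rank factorizations (illustrative sketch).

    h        : input vector of shape (n,)
    Us, Vs   : lists of K factors, U_k of shape (m, d), V_k of shape (n, d)
    P        : projection producing the mixing weights, shape (K, segments)
    segments : number of segments for mean pooling of h before projection
    Returns W(h) @ h = sum_k pi_k(h) * U_k @ (V_k^T @ h) and the weights pi."""
    pooled = h.reshape(segments, -1).mean(axis=1)     # pooling before projection
    pi = 1.0 / (1.0 + np.exp(-P @ pooled))            # unnormalized sigmoid mixing weights
    out = sum(pi[k] * Us[k] @ (Vs[k].T @ h) for k in range(len(Us)))
    return out, pi

# Toy usage: m=64 outputs, n=32 inputs, rank d=4, K=3 components, 8 pooling segments.
rng = np.random.default_rng(0)
m, n, d, K, seg = 64, 32, 4, 3, 8
Us = [rng.standard_normal((m, d)) for _ in range(K)]
Vs = [rng.standard_normal((n, d)) for _ in range(K)]
P = rng.standard_normal((K, seg))
y, pi = mixture_low_rank_forward(rng.standard_normal(n), Us, Vs, P, seg)
```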
Firstly, with adaptive mixtures, the low-rank factorization model achieved a 40% reduction in FLOPs, and even surpassed the performance of the full-matrix baseline by decreasing the perplexity by 1.7 points. Secondly, the use of adaptive mixtures can significantly improve the performance compared with regular, non-adaptive low-rank factorization. Thirdly, using pooling before projection can be a good choice for computing the mixing weights π. It not only reduces the computation and parameter size, but can better capture the global information and achieve better accuracy. CNN for image recognition. We further demonstrate the effectiveness of the proposed approach on compressing CNN models on ImageNet BID1. We chose to use modern compact CNN models as the baseline (which are harder to compress), rather than using the bulky CNN models (which are easier to compress). Specifically, we choose to compress the point-wise convolution in depth-wise separable convolutions BID0, MobileNet BID2 BID7 in particular. TAB0 shows the comparison of different state-of-the-art compact convolutional models. We observed that compared to the regular low-rank factorization of the MobileNet model (a.k.a. MobileNet V2), the proposed method achieves significantly better accuracy (2.5% and 2% higher for two different low-rank MobileNet settings, respectively), while only adding negligible extra FLOPs (less than 1%).
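The FLOPs and parameter savings discussed above follow directly from the factor sizes; the small count below uses made-up dimensions (not those of the paper's LSTM or MobileNet experiments) purely to illustrate the trade-off, noting that multiply-adds for a matrix-vector product scale with the same quantities.

```python
def dense_params(m, n):
    # Full weight matrix W in R^{m x n}.
    return m * n

def low_rank_params(m, n, d):
    # Rank-d factorization W = U V^T.
    return d * (m + n)

def mixture_params(m, n, d, K, pooled_dim):
    # K low-rank factor pairs plus the small projection that produces pi.
    return K * d * (m + n) + K * pooled_dim

m, n, d, K = 1024, 1024, 64, 4
print(dense_params(m, n))                   # 1,048,576
print(low_rank_params(m, n, d))             # 131,072
print(mixture_params(m, n, d, K, 64))       # 524,544
```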
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
B1eHgu-Fim
A simple modification to low-rank factorization that improves performances (in both image and language tasks) while still being compact.
Deep learning training accesses vast amounts of data at high velocity, posing challenges for datasets retrieved over commodity networks and storage devices. We introduce a way to dynamically reduce the overhead of fetching and transporting training data with a method we term Progressive Compressed Records (PCRs). PCRs deviate from previous formats by leveraging progressive compression to split each training example into multiple examples of increasingly higher fidelity, without adding to the total data size. Training examples of similar fidelity are grouped together, which reduces both the system overhead and data bandwidth needed to train a model. We show that models can be trained on aggressively compressed representations of the training data and still retain high accuracy, and that PCRs can enable a 2x speedup on average over baseline formats using JPEG compression. Our hold across deep learning architectures for a wide range of datasets: ImageNet, HAM10000, Stanford Cars, and CelebA-HQ. Distributed deep learning exploits parallelism to reduce training time, and consists of three key components: the data pipeline (storage), the forward/backward computation (compute), and the variable synchronization (network). A plethora of work has investigated scaling deep learning from a compute-or network-bound perspective (e.g., ; ; ; ; ; ; ; ; ; . However, little attention has been paid toward scaling the storage layer, where training starts and training data is sourced. Unfortunately, hardware trends point to an increasing divide between compute and networking or storage bandwidth (; ;). For example, the transportation of data for machine learning is a key factor in the design of modern data centers , which are expected to be serviced by slow, yet high capacity, storage media for the foreseeable future (; ;). This, combined with the memory wall-a lack of bandwidth between compute and memory-suggests that, while computation may be sufficient moving forward, the mechanisms for moving data to the compute may not (; ; ;). The storage pipeline is therefore a natural area to seek improvements in overall training times, which manifest from the storage medium, through the network, and into the compute nodes. In this work, we propose a novel on-disk format called Progressive Compressed Records (PCRs) as a way to reduce the bandwidth cost associated with training over massive datasets. Our approach leverages a compression technique that decomposes each data item into deltas, each of which increases data fidelity. PCRs utilize deltas to dynamically compress entire datasets at a fidelity suitable for each application's needs, avoiding duplicating the dataset (potentially many times) at various fidelity levels. Applications control the trade-off between dataset size (and, thus, bandwidth) and fidelity, and a careful layout of deltas ensures that data access is efficient at a storage medium level. As a , we find that for a variety of popular deep learning models and datasets, bandwidth (and therefore training time) can be easily reduced by 2× on average relative to JPEG compression without affecting model accuracy. Overall, we make the following contributions: 1. In experiments with multiple architectures and several large-scale image datasets, we show that neural network training is robust to data compression in terms of test accuracy and training loss; however, the amount of compression that can be tolerated varies across learning tasks. 2. 
We introduce Progressive Compressed Records (PCRs), a novel on-disk format for training data. PCRs combine progressive compression and careful data placement to enable applications to dynamically choose the fidelity of the dataset they consume, reducing data bandwidth. 3. We demonstrate that by using PCRs, training speed can be improved by 2× on average over standard formats using JPEG compression. This is achieved by selecting a lower data fidelity, which, in turn, reduces the amount of data read without significantly impairing model performance. Two complementary concepts make up the process of storing training data: the layout of the data on the storage medium and the representation of the data. Data layout is important because it can help fully utilize the bandwidth potential of the underlying storage system. Data representation is important because it can reduce the amount of data transferred per data unit (i.e., a bandwidth requirement reduction). An example of data representation within the scope of this work is compression, which increases the computation per bit-a key property to consider as computation increases faster than bandwidth to storage. Compression may lower image quality by introducing artifacts or blur. Figure 1: A hard disk with multiple data formats. Other storage media have the same space, bandwidth, and locality considerations. File-per-image formats have highly random behavior. Record formats encode many records at various data qualities to save bandwidth and have sequential behavior for a given fidelity. PCRs maintain the sequential behavior of record formats at multiple fidelities without space overheads. Record Layouts. Learning from data requires sampling points from a training set, which can cause small, random accesses that are detrimental to the performance of the storage device. Record layouts, such as TensorFlow's TFRecords (TFRecords) or MXNet's ImageRecord (ImageRecord), attempt to alleviate this problem by batching data points together to increase access locality. Batches of training data (i.e., dataset subsets) are then accessed together, amortizing delays in access time across multiple data points. These batches of data are called records. The key to any record layout is the serialization, which is the conversion of data structures into byte streams. Record designs have different performance properties (e.g., space or access time) when written to disk, as shown in Figure 1. Image Compression. Compressed forms are commonly used to represent training data. JPEG is one of the most popular formats for image compression and is used ubiquitously in machine learning (e.g., ; ; ;). Most compression formats (including JPEG) only allow for the compression level, i.e., the trade-off between data size and fidelity, to be set at encoding time, which often in choosing this level independent of the application. This can in over-compression, which may negatively impact application convergence quality, or under-compression, which in excess data size, and thus, slower storage system performance. Worse, deep learning pipelines often involve an application-defined post-processing step (e.g., data augmentation), which may further distort an image and obscure the relationship between image fidelity and model accuracy (; ; ;). 
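To make the record layouts discussed above concrete, here is a minimal length-prefixed sketch; it is not the actual TFRecord or ImageRecord wire format, only an illustration of why record files turn many small random reads into one mostly sequential read.

```python
import struct

def write_record(path, examples):
    """Write (label, jpeg_bytes) pairs as one length-prefixed record file."""
    with open(path, "wb") as f:
        for label, img_bytes in examples:
            f.write(struct.pack("<iq", label, len(img_bytes)))   # label, payload size
            f.write(img_bytes)

def read_record(path):
    """Read a record file back sequentially."""
    out = []
    with open(path, "rb") as f:
        while True:
            header = f.read(12)
            if not header:
                break
            label, size = struct.unpack("<iq", header)
            out.append((label, f.read(size)))
    return out

write_record("shard0.rec", [(3, b"\xff\xd8fake-jpeg\xff\xd9")] * 4)
assert len(read_record("shard0.rec")) == 4
```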
While setting encoding-time parameters is unavoidable, the ability to decompress data as it becomes available (i.e., dynamic compression) provides a means to avoid some of the bandwidth expenses of under-compression by simply terminating decompression once sufficient fidelity is reached. In Figure 2, we provide a high-level illustration of the JPEG algorithm, which can be customized to support dynamic compression. First, an image is split into blocks of size 8 × 8. Each block is converted into the frequency domain, such that frequency 0 is the average color of the block, and higher frequencies encode rapid changes in the block. The low frequencies, such as the average value of the block, store the bulk of the perceptually-relevant content in the image (e.g., knowing the block is mostly blue is more important than knowing a white wave is rippling through it). Quantization, which discards information from the block and in compression, thus prioritizes discarding higher frequencies. The ing quantized table is then serialized into a flat form. Since data is rendered on a screen from left to right, top to bottom, it makes sense to encode the data in this manner, which in a sequential format 1. Decoding the ing data is simply a matter of inverting (albeit losslessly) the process that we just described. Figure 2: Top: JPEG carves an image into blocks, which are then converted into frequencies, quantized, and serialized. Progressive compression writes out a subset of important coefficients from each block before re-visiting the block. Bottom: Higher scans have diminishing fidelity returns. Progressive Image Compression. Progressive formats allow data to be read at varying degrees of compression without duplication. With the sequential case, data is ordered by blocks, and thus, partially reading the data in "holes" in the image for unread blocks . Dynamic compression ensures that all blocks get some information (deltas) before revising them (with more deltas). As progressive formats are simply a different traversal of the quantization matrix, with all else being equal, they contain the same information as sequential JPEG (JPEGTran LibJPEG). Progressive JPEG, combined with an additional rearrangement of data, forms the basis of the idea behind PCRs. In Figure 2, non-progressive formats serialize the image matrix in one pass, while progressive formats serialize the matrix in disjoint groups of deltas which are called scans. Scans are ordered by importance (e.g., the first few scans improve fidelity more than subsequent scans). Thus, any references to images generated from scan n will implicitly assume that the image decoder had access to all prior scans (i.e., {scan 1, scan 2, . . ., scan (n − 1)}). The bottom of Figure 2 shows how image fidelity improves from a single scan to utilizing all scans. In this section, we introduce a novel record format for machine learning training called Progressive Compressed Records (PCRs). PCRs are a combination of both layout and data representation. Efficient layouts guarantees that hardware is fully utilized (in terms of bandwidth), while efficient data representations can reduce the total amount of work that is required of the system. To this end, we introduce the concept of scan groups in Section 3.1, which leverage both layout and progressive compression to obtain dynamic compression, allowing both high performance reads while reducing the amount of data read. 
Using progressive compression, scan groups break images into deltas, which are then rearranged in order to facilitate reduced, yet sequential, data access. In Section 3.2, we discuss how PCRs are implemented, covering both creating PCRs (encoding) and reading them (decoding). The benefits of the PCR implementation boiling down to a bit shuffle are that: 1) PCRs are easy to implement, 2) they are fundamentally lossless, and 3) processing them is fast. As we demonstrate in Section 4, while PCRs can be implemented easily, they manifest in large speedups for a variety of scenarios. Further, PCRs can be generalized beyond images and JPEG. Scan groups are a collection of scans (deltas) of the same fidelity. Scan groups combine layout with progressive compression to allow reading subsets of the compressed data with high hardware efficiency. PCRs make the assumption that the entire training data will be read at the same fidelity. Using this assumption, scan groups rearrange the data such that all deltas of the same fidelity are grouped together. This, in turn, enables groups of deltas to be read together sequentially, which creates dynamicity in the decoding process. Since scans are sorted by importance, and scan groups are a set of scans, the scan groups are also sorted by importance. To paint a clear representation of how scan groups work, we point the reader to Figure 3. PCRs begin with some metadata which is assumed to be needed by all machine learning tasks, such as labels or bounding boxes. In practice, metadata is small in size, and, thus, the space overheads are negligible. The metadata is followed by scan groups, which consist of scans. The scan 1 representation of the shark in Figure 2 will be available in its record once data is read up to offset 1. Likewise, the scan 3 representation will be available once the record is read up to offset 3, and the representation will be more crisp as 3 scans were used per image, rather than 1. Reading up to the end of the record yields the most complete representation of the image. As scan groups consist of groups of the same fidelity, every image contained in a record is available at the same fidelity at the same group offset. Users of PCRs can read data at a certain scan fidelity by simply reading the on-disk byte stream from the start of the PCR (i.e., offset 0) to the byte offset corresponding to the corresponding scan group. Partially reading the records in bandwidth savings without re-encoding the data. There are two fundamental PCR implementation details: the encoding process and the decoding process. The encoding process transforms a set of JPEG files into a directory, which contains 1) a database for PCR metadata and 2) at least one.pcr file. The decoding process, which takes the directory as input and yields a set of JPEG images, efficiently inverts a subset of the encoding. The dataset is split into many PCRs, and, thus, the training process is reading tens to hundreds of.pcr files per epoch. The data loader is where the PCR decoding library interfaces with the inputs provided to deep learning libraries (e.g., TensorFlow , MXNet, PyTorch ). Below, we describe how each of these steps is done. Figure 3: PCRs encode label metadata followed by all scan groups. Accessing the dataset at a lower fidelity requires reading up to a certain address offset. Encoding. Given a set of images, the PCR encoder must break the images into scans, group the scans into scan groups, and sort the scan groups by fidelity. 
Once the groups are sorted, the PCR encoder can serialize the groups while taking note of their offsets (so that subsets may later be decoded). The metadata (e.g., labels) is prepended to the serialized representation, and the serialized representation is written to disk. We focus on grouping JPEG due to its generality, but PCRs can use any dataset-level progressive format. Images can be decomposed in both space and fidelity; other data modalities (e.g., video) may also have time. Our implementation uses JPEGTRAN (JPEGTran Man Page) to losslessly transform the set of JPEG images into a set of progressive JPEG images. With the default settings, each JPEG is broken up into 10 scans. The encoder scans the binary representation of the progressive JPEG files, searching for the markers that designate the end of a scan group. The encoder thus has access to all 10 offsets within the JPEG files that can be used to determine the boundaries between scan regions. Forming scan groups requires grouping the scan regions with the same fidelity together, which can be done in one pass over the set of images corresponding to that PCR. This grouping must be reversible, as the decoding process will un-group the scans to reconstruct the original images. This grouping can be done with existing serialization libraries. We use Protobuf (Protobuf) to serialize the groups as well as the labels. However, it is key that every group (and the metadata) be serialized as a separate message, as Protobuf can rearrange the contents within a message, and thus can rearrange the ordering of the groups themselves. We finally concatenate the contents of the messages and write them out as one file. As shown in Appendix A.5, any record format conversion can be expensive; PCRs benefit from requiring only a single conversion for multiple tasks. Decoding. To decode a PCR file, one has to first lookup the file's scan group offsets in the database. The offsets provide sufficient information to do a partial read of the file (e.g., instead of reading the entire file, we read only enough bytes to read up to the desired scan group). Decoding the JPEGs requires inverting the PCR scan-group grouping process for the available scan-groups prior to JPEG decode. Since we are missing scan-groups, we terminate the byte stream with an End-of-Image (EOI) JPEG token-this technique allows most JPEG decoders to render the byte stream with only the available subset of scans. The bulk of the inverse conversion is done in 150 lines of C++ code. Loader. We implemented PCR loaders using PyTorch's dataloader as well as DALI 's ExternalSource operator to return batches of images at a configurable fidelity (with the corresponding labels). We find that a pipeline abstraction simplifies loader design, since recordbased datasets can be easily iterated sequentially. In contrast, the PyTorch Dataloader abstrac-tion, which assumes that we can index randomly into an in-memory data structure (e.g., i = RandInt(0, n); (x, y) = data[i];), is harder to use for constantly fetching record formats off disk. Our implementation, while being only several hundred lines of code, obtains image rates that are competitive (e.g., faster/slower depending on number of scans) with the included DALI TFRecord loader, showing that PCRs can be implemented efficiently (i.e., fast enough to rarely bottleneck data loading) with a low amount of engineering effort. This section presents our evaluation of PCRs using a suite of large-scale image datasets. 
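Before the evaluation, the encode/decode path described above can be summarized in a simplified sketch. It assumes scans can be located by searching for Start-of-Scan markers (0xFFDA), which JPEG byte stuffing makes safe in practice, and it omits the Protobuf metadata and offset database; all function names are ours, and this is not the paper's implementation.

```python
def split_scans(jpeg_bytes):
    """Split a progressive JPEG into (header, [scan_0, scan_1, ...]).

    Simplified: scans are delimited by Start-of-Scan markers (0xFF 0xDA)."""
    sos = [i for i in range(len(jpeg_bytes) - 1)
           if jpeg_bytes[i] == 0xFF and jpeg_bytes[i + 1] == 0xDA]
    header = jpeg_bytes[:sos[0]]
    bounds = sos + [len(jpeg_bytes)]
    scans = [jpeg_bytes[a:b] for a, b in zip(bounds[:-1], bounds[1:])]
    return header, scans

def encode_pcr(jpeg_images):
    """Group scan i of every image into scan group i (metadata omitted)."""
    split = [split_scans(img) for img in jpeg_images]
    n_scans = min(len(s) for _, s in split)
    headers = [h for h, _ in split]
    groups = [b"".join(s[i] for _, s in split) for i in range(n_scans)]
    sizes = [[len(s[i]) for _, s in split] for i in range(n_scans)]
    return headers, groups, sizes

def decode_image(headers, groups, sizes, img_idx, fidelity):
    """Rebuild image img_idx using only the first `fidelity` scan groups."""
    out = bytearray(headers[img_idx])
    for g in range(fidelity):
        off = sum(sizes[g][:img_idx])            # offset of this image inside group g
        out += groups[g][off:off + sizes[g][img_idx]]
    out += b"\xff\xd9"                           # terminate with an EOI marker
    return bytes(out)
```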
As large images are more taxing to a system's network and storage, our evaluation focuses on datasets with high-resolution images. We describe our experimental setup in Section 4.1. We present our evaluation in Section 4.2, showing that halving data bandwidth per image in comparable accuracy but with half the training time. In Section 4.3, we analyze the intuitive relationship between objective measures of image fidelity and time-to-accuracy. Finally, in Section 4.4, we present that trace the training time speedups to the data loading times themselves. Our evaluation uses the ImageNet ILSVRC , HAM10000 , Stanford Cars , and CelebA-HQ datasets, which are described below. See Appendix A.4 for additional details. • ImageNet-100 ImageNet provides a wide diversity of classes, of which we focus on the first 100 to make training times more tractable. Since classes are roughly ordered by ImageNet categories, this in a fine-grained, i.e., hard to classify, multiclass task. We convert the dataset into PCRs in batches of 1024, which in 126 PCRs. We use the full ImageNet dataset in Appendix A.7. • HAM10000 We split the HAM10000 dataset randomly 80%/20% between train and test. We convert the dataset into PCRs in batches of 64, which in 125 PCRs of similar size as the ones used for ImageNet-100. • Stanford Cars The Stanford Cars dataset is another fine-grained classification dataset, since all images are cars, and there are 196 classes spread over 16k images. We believe this dataset highlights some of the worst-case training scenarios, as it is considerably easier to predict highly compressed variants of unrelated images by exploiting low frequency image statistics (e.g., planes vs. frogs). We explore a coarse-grained version of Cars in Appendix A.6. Cars has 63 PCRs. • CelebAHQ-Smile CelebA-HQ is a high-resolution derivative of the CelebA dataset , which consists of 30k celebrity faces at 1024 2. We use the annotations provided by CelebA to construct a smiling or not smiling dataset. We split the 30k dataset into 80%/20% train/test, and we convert the training set into 93 PCRs. All datasets utilize resizing, crop, and horizontal-flip augmentations, as is standard for ImageNet training. We provide examples of scan groups for these datasets in Appendix A.8. Training Regime. We use pretrained ImageNet weights for HAM10000 and Cars due to the limited amount of training data. We use standard ImageNet training, starting the learning rate at 0.1 (with gradual warmup ) and dropping it on epoch 30 and 60 by 10×. After augmentations, all inputs are of size 224 × 224. The pretrained experiments (HAM10000 and Cars) start at a learning rate of 0.01 to avoid changing the initialization too aggressively. We use fp16 training as it in an additional 10% images per second (see Appendix A.3). We use a ResNet18 and ShuffleNetv2 architecture for our experiments with a batch size of 128 per each worker. We run each experiment at least 3 times to obtain confidence intervals given different random seeds and sources of non-determinism such as multi-threading and I/O., and one node is used as a Ceph metadata server (MDS). The remaining 10 nodes are used as machine learning workers for the training process. This means there is a 2:1 ratio between compute and storage nodes. We use PyTorch (v1.12) with NVIDIA Apex (Apex) (v0.1) and NVIDIA DALI (v0.14.0). We use at least four worker threads to prefetch data in the loader. 
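The prefetching loader mentioned above can be pictured with a small producer-consumer sketch (illustrative only; it is not DALI's or PyTorch's internals): worker threads fill a bounded queue, and a "loading stall" is simply the time the training loop spends blocked on an empty queue.

```python
import queue, threading, time

def prefetch_loader(read_batch, num_workers=4, depth=8):
    """Minimal prefetching loader: workers fill a bounded queue of batches."""
    q = queue.Queue(maxsize=depth)

    def worker():
        while True:
            q.put(read_batch())          # blocks when the queue is full

    for _ in range(num_workers):
        threading.Thread(target=worker, daemon=True).start()

    while True:
        start = time.time()
        batch = q.get()                  # a drained queue here is a "loading stall"
        stall = time.time() - start
        yield batch, stall

# Toy usage: pretend each batch takes 5 ms of I/O and decode.
loader = prefetch_loader(lambda: (time.sleep(0.005), "batch")[1])
for _, (batch, stall) in zip(range(3), loader):
    print(batch, f"stalled {stall * 1000:.1f} ms")
```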
While we focus on this particular distributed setting, we observe similar time-to-accuracy gains on a single machine with eight GPUs sharing the same disk, and we believe the will generalize to different setups. The time-to-accuracy for ResNet18 training are presented in Figure 4, while those of ShuffleNetv2 are presented in Figure 6. See Appendix A.2 for a tabular view and Appendix A.1 for the corresponding training loss . All scan groups within a dataset were run for the same amount of epochs, so lower scan groups finish earlier. 90 epochs are shown for ImageNet, 150 epochs are shown for HAM10000, 250 epochs are shown for Stanford Cars, and 90 epochs are shown for CelebAHQ-Smile. We sample the test accuracy every 15 epochs for non-ImageNet datasets to reduce interference with training measurements. To avoid measuring implementation differences with other loaders, our evaluation focuses on the differences obtained by reading various amounts of scan groups. Reading all the data (up to scan group 10) is the baseline. First, we note that for all datasets, except for Cars, PCRs provide a 2× boost to time-to-accuracy compared to the baseline. The reason for this speedup is that lower scan groups are smaller. As shown in Figure 5, scan group 5 is roughly half the size of the baseline, and scan group 1 is a fifth of scan group 5 (i.e., a potential 10× bandwidth savings). This trend holds across datasets (see Appendix A.1). As we will discuss in Section 4.4, the space savings manifest in reduced dataloader latencies. Second, we note that there is an inherent trade-off between convergence quality and the speedup attained by using less storage resources. In general, although lower fidelity scan groups allow the system to operate more efficiently, they do so at the expense of model convergence. Scan group 1, the lowest fidelity scan, performs poorly, especially on Cars, where fine-grained details are important. Scan groups limit the maximum achievable accuracy on a task; if learning plateaus prematurely, applications should raise the scan group in a manner similar to dropping the learning rate. Third, the relative rankings of scan groups are relatively stable across models and datasets, which reduces tuning efforts in choosing the appropriate scan group. We further relate these rankings to the fidelity of the scan groups in Section 4.3. Our is that, for most datasets, scan group 5 costs half as much in terms of bandwidth, but reaches the same level of test accuracy as the baseline-thus, it is a good default. This is most apparent for ImageNet and HAM10000, which are challenging enough for small variations in image fidelity to make a commensurate difference in test accuracy. In contrast, Cars is too fine-grained to allow images to be degraded, and CelebAHQ-Smile is too coarse-grained for image degradation to matter. We use MSSIM , a standard measure of image similarity, to compare how various scans approximate the reference image, and we show the in Figure 7. We find that there is a strong connection between MSSIM and the ing final test accuracy, especially when comparing scan groups within a task. Our preliminary tests demonstrate that scan groups that have very similar MSSIM perform very similarly, which is why only groups 1, 2, 5, and the baseline are shown. Due to the way progressive JPEG is coded by default, groups tend to cluster (e.g., 2, 3, and 4 are usually similar, while 5 introduces a difference). 
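A sketch of how such fidelity scores can be computed; we use single-scale SSIM from scikit-image as a stand-in for the MSSIM measure referenced above, and the synthetic arrays merely stand in for decodes at different scan groups.

```python
import numpy as np
from skimage.metrics import structural_similarity

def fidelity_vs_reference(reference, decoded_at_scan):
    """SSIM of each partially decoded image against the full-fidelity reference.

    reference       : H x W x 3 uint8 array (all scans decoded)
    decoded_at_scan : dict mapping scan-group index -> H x W x 3 uint8 array"""
    return {g: structural_similarity(reference, img, channel_axis=-1, data_range=255)
            for g, img in decoded_at_scan.items()}

# Toy usage with synthetic images standing in for scan-group decodes.
rng = np.random.default_rng(0)
ref = rng.integers(0, 255, (64, 64, 3), dtype=np.uint8)
blurry = (ref // 2 + 64).astype(np.uint8)          # stand-in for a low-fidelity scan
print(fidelity_vs_reference(ref, {1: blurry, 10: ref}))
```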
We note that MSSIM being poor (e.g., scan group 1 for cars) or MSSIM being close to baseline (e.g., scan group 5 for HAM10000) are good predictors of relative test accuracy within tasks. MSSIM can therefore be used as a diagnostic for choosing scans. The datasets we evaluated show that data loading can slow down the training process. To highlight these slowdowns, and the improvements PCRs achieve by not using all scan groups, we present the loading time of data for the ResNet18 ImageNet-100 run in Figure 8. We obtain similar for the other datasets. The baseline of using all scan group in high periodic loading stalls, where the prefetching queue was drained. Upon blocking, training cannot proceed until the worker threads obtain a full batch of data. Periods of (mostly) no stalls are caused by both threads pre-fetching the data and single records servicing multiple minibatches. Using fewer scan groups reduces the amount of data read, which in lower magnitude stalls. We observe these stalls with both DALI and PyTorch loaders. Training Over Large Datasets. Training with massive amounts of parallelism (thus stressing system bandwidth) while achieving near-linear speedup has been the focus of previous work, and it highlights a practical need for efficient data pipelines at scale. A common objective is training mod-els over ImageNet in a record amount of time (; ; ; ;). This line of work, while demonstrating immense bandwidth needs, typically keeps data in memory, avoiding storage problems altogether. Recently, the high performance computing community has become interested in training models at massive scale (27k GPUs) . Since each GPU matches a disk in bandwidth, the dataset was partitioned among the local memory/storage of the nodes, avoiding the distributed filesystem. Our work attempts to reduce the storage bottleneck altogether, such that anything from a couple disks to a distributed file system could service many GPUs. A separate line of work shows that I/O is a significant bottleneck for certain tasks and proposes optimizing I/O via a set of deep-learning specific optimization to LMDB . In contrast, our focus is more on data representation, which is independent of the internals of the storage system. Production systems such as TFX have used custom Protobuf parsers to get 2-5× speedups for simple (e.g., linear) models; these techniques are complementary to ours and reduce loader computational overheads. Dataset Reduction Techniques. The availability of larger datasets has spawned interest in learning algorithms that guaranteed both "good" model accuracy and lower computational complexity. Data reduction techniques, such as sketching, coresets, clustering, and sampling, have been used to reduce the size of a training set (; ; ; ; ; ;). A different approach is to use the unaltered training set, but reduce the size of the active training set to reduce bandwidth requirements . In contrast, we modify the data representation and layout to be more efficient across a wide variety of models. Compression. Finally, the reduction of data size via compression methods is ubiquitous across computer systems. To avoid costly model transmission/storage, prior work compressed neural network models (b; a; ; ; ;). Similarly, dataset distillation compresses a model's parameters into a few training examples. Our work attempts to compress data for training, and not the network itself. Prior work has looked into optimizing training systems by compressing neural network training network traffic (; ; ; ; . 
This trend is not specific to machine learning; prior work in databases, computer memories, and the web used compression to reduce system bandwidth requirements (; ;). Our work focuses on bandwidth for ML data pipelines by utilizing the compression robustness found in most models. Other work modifies models to be able to directly train on compressed representations for the purpose of avoiding decoding or reducing model complexity (; ; ;). Our work differs in motivation, as we do not focus on model computation or make modifications to the models. Previous work has investigated how image degradation (e.g., JPEG artifacts) affect inference (; ; ;); in contrast, our work focuses on the effects of compression on training. To continue making advances in machine learning, researchers will need access to larger and larger datasets, which will eventually spill into (potentially distributed) storage systems. Storage and networking bandwidth, which are precious resources, can be better utilized with efficient compression formats. We introduce a novel record format, Progressive Compressed Records (PCRs), that trades off data fidelity with storage and network demands, allowing the same model to be trained with 2× less storage bandwidth while retaining model accuracy. PCRs use progressive compression to split training examples into multiple examples of increasingly higher fidelity without the overheads of naive approaches. PCRs avoid duplicating space, are easy to implement, and can be applied to a broad range of tasks dynamically. While we apply our format in this work specifically to images with JPEG compression, PCRs are general enough to handle various data modalities or additional compression techniques; future work will include exploring these directions in fields outside of visual classification, such as audio generation or video segmentation. Below, we provide additional experiment plots that were omitted in the main text. Figure 9 and Figure 10 contain the loss over time for the ResNet-18 and ShuffleNetv2 experiments shown in Section 4. Figure 11 extends Figure 5 to show the scan sizes for all datasets. It is worth noting that Top-5 accuracies mirror the Top-1 accuracies trends for ImageNet and Cars. To measure the effect of compression without accounting for time, we show accuracy vs. epoch plots in Figure 12 and Figure 13. While compression can itself be viewed as a data augmentation (e.g., removing high frequency features that can possibly cause overfitting), we notice that it does not usually improve accuracy. Rather, most of the gains in time-to-accuracy are from faster image rates. The size in bytes of various levels of scans read. Scan group 0 is shown, which contains only labels and is typically ∼100 bytes. Each scan adds roughly a constant amount of data (i.e., linear scaling), although certain scans add considerably more than others (i.e., sizes sometimes cluster) due to techniques like chroma subsampling. Using all 10 scans can require over an order of magnitude more bandwidth than 1-2 scans. A.2 TIME TO CONVERGENCE TABLE We provide a table of time-to-accuracy in Table 1 This speed, combined with additional optimizations such as multi-core parallelism (e.g., we would expect 4× these rates with 4 cores), suggests that while decoding can be an issue, the penalty from using progressive images can be managed more easily than a storage bottleneck (i.e., compute can usually be traded for storage bandwidth). 
Further, some of the decoding can actually be moved to an accelerator, like the GPU used for training, something which is already available via nvJPEG 2. Reducing this computational expense by optimizing the implementation or reducing the amount of scans (since our experiments only use 4 distinct scans) is left as future work. Image Loading Rates. We provide image loading rates observed during training in Table 2. Using more scans slows down training significantly, as can be seen in the image rates. It is worth noting that these rates vary considerably during runtime (due to stalls), and ShuffleNetv2 is capable of a higher maximum training rate than ResNet-18. Further, as the number of scans is reduced, image rates approach the maximum achievable by the cluster for each model. Below we describe the characteristics of the used datasets. ImageNet-100 Creation. The ImageNet-100 dataset was constructed by subsampling 100 classes out of the 1000 classes found in the ImageNet ILSVRC dataset . These classes were chosen arbitrarily to limit computation time-they are the first 100 classes of ImageNet in sorted directory listing form i.e., n01440764-n01855672. CelebAHQ-Smile Creation. The CelebAHQ dataset was created as a high quality version of the CelebA dataset . CelebA contains attributes for each face, such as whether the face is smiling or not. CelebAHQ-Smile utilizes these attributes to construct a dataset of 30k faces, where each face is assigned a binary variable for smiling or not. While the CelebA dataset was subsampled to construct CelebAHQ, we do not subsample CelebAHQ further (i.e., we use all 30k images it contains). Record and Image Quality Details. We provide the dataset size details for the encoded datasets in Table 3. As the original (e.g., lossless) images are hard to find, we estimate the JPEQ qual-ity setting of the training set with ImageMagick using identify -format'%Q'. The JPEG quality setting determines the level of frequency quantization outlined in Figure 2. Intuitively, one would expect that higher quality JPEG images could allow more aggressive PCR compression rates for a fixed resolution, since each image has more redundant information on average. ImageNet and HAM10000 both have high quality images. CelebAHQ has lower quality images, but they are downscaled to 256 × 256 for training purposes, which increases the information density in the image (e.g., blurry images can be made to appear less blurry by downsampling), a fact exploited in prior work. Cars is neither high JPEG quality or large resolution. Under-compressing images (perhaps at high resolution) during the initial JPEG compression may allow for a larger range of viable scan groups. We provide bandwidth-optimized record baselines in Figure 14, where we re-encode the images using a statically-chosen level of compression. These baselines re-encode the images with 50% quality and 90% JPEG quality, respectively, to reduce dataset size at a fixed level of fidelity. It is worth noting that re-encoding images compounds with the original JPEG compression, so the re-encoded image quality may be lower than 50% or 90% quality compared to the images in their original lossless form. This is in contrast to PCRs, which losslessly convert the images into a progressive format, which allows dynamic access to the level of fidelity. We observe that both the baseline method of dataset bandwidth reduction and the PCR method can take considerable encoding time, since the encoding time scales proportionally to the dataset size. 
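For reference, here is a minimal sketch of the conversion step discussed here: losslessly transcoding an existing baseline JPEG into a progressive one with jpegtran (from libjpeg), with a lossy Pillow re-encode as a fallback. File names and the quality setting are placeholders, and the default progressive scan script may differ from the scan groups used in the paper.

import subprocess
from PIL import Image

def to_progressive_lossless(src_path, dst_path):
    # jpegtran -progressive rearranges the entropy-coded data without re-quantizing,
    # so no additional quality loss is introduced.
    with open(dst_path, "wb") as out:
        subprocess.run(["jpegtran", "-progressive", src_path], stdout=out, check=True)

def to_progressive_reencode(src_path, dst_path, quality=90):
    # Lossy fallback: re-encoding compounds with the original JPEG compression.
    Image.open(src_path).save(dst_path, "JPEG", progressive=True, quality=quality)

# to_progressive_lossless("img_baseline.jpg", "img_progressive.jpg")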
We also observe that the PCR method is competitive (1.15× to 2.98×) with the baseline in terms of encoding time. PCRs avoid having to re-encode a dataset at multiple fidelity levels, and, therefore, they can save both storage space and encoding time. Converting the full ImageNet into record format takes roughly 16× longer than the 6 minutes needed for the 10× smaller subsampled dataset: the PCR conversion is 96 minutes (53 minutes are spent in JPEG conversion). One reason for this additional slowdown is that any system caches (e.g., in the distributed filesystem or the file cache on the converter node) are less likely to see a cache hit due to the working set size being larger. Although the exact conversion times are dependent on implementation, hardware, and the dataset, conversion times can be in the range of one hour of compute time per 100 GB. We provide experiments validating that compression needs vary within the same dataset for different tasks in Figure 15 and Figure 16, which show accuracy and loss, respectively. This experiment simply coarsens the granularity of the classification task, and demonstrates that lower scan groups can be used for tasks that are easier. The full range of classes is used for Baseline (i.e., car make, model, and year create a unique class), only car make is used for Make-Only, and a binary classification task of Corvette detection is used for Is-Corvette. We can see that, compared to the original task, the coarse tasks reduce the gap between scan groups, and the binary task closes the gap even more. This suggests that as the tasks get easier, the tolerance for lower scan groups grows. Simply re-assigning the class labels to a coarser class reduces the complexity of the task and closes the accuracy gap across scan groups. A fixed PCR record encoding (i.e., without re-encoding) can support multiple tasks at the optimal quality, whereas static approaches may need one encoding per task. Some training methodologies, such as Progressive GAN training, utilize
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
S1e0ZlHYDB
We propose a simple, general, and space-efficient data format to accelerate deep learning training by allowing sample fidelity to be dynamically selected at training time
It is fundamental and challenging to train robust and accurate Deep Neural Networks (DNNs) when semantically abnormal examples exist. Although great progress has been made, there is still one crucial research question which is not thoroughly explored yet: What training examples should be focused and how much more should they be emphasised to achieve robust learning? In this work, we study this question and propose gradient rescaling (GR) to solve it. GR modifies the magnitude of logit vector’s gradient to emphasise on relatively easier training data points when noise becomes more severe, which functions as explicit emphasis regularisation to improve the generalisation performance of DNNs. Apart from regularisation, we connect GR to examples weighting and designing robust loss functions. We empirically demonstrate that GR is highly anomaly-robust and outperforms the state-of-the-art by a large margin, e.g., increasing 7% on CIFAR100 with 40% noisy labels. It is also significantly superior to standard regularisers in both clean and abnormal settings. Furthermore, we present comprehensive ablation studies to explore the behaviours of GR under different cases, which is informative for applying GR in real-world scenarios. DNNs have been successfully applied in diverse applications (; ;). However, their success is heavily reliant on the quality of training data, especially accurate semantic labels for learning supervision. Unfortunately, on the one hand, maintaining the quality of semantic labels as the scale of training data increases is expensive and almost impossible when the scale becomes excessively large. On the other hand, it has been demonstrated that DNNs are capable of memorising the whole training data even when all training labels are random . Therefore, DNNs struggle to discern meaningful data patterns and ignore semantically abnormal examples 1 simultaneously . Consequently, it becomes an inevitable demand for DNNs to hold robustness when training data contains anomalies (; ; ; ; ; ; ;). Recently, great progress has been made towards robustness against anomalies when training DNNs . There are three appealing perspectives in terms of their simplicity and effectiveness: 1) Examples weighting. For example, knowledge distilling from auxiliary models is popular for heuristically designing weighting schemes. However, it is challenging to select and train reliable auxiliary models in practice (; ; ; ; b). 2) Robust loss functions (; ; ; b); 3) Explicit regularisation techniques (; . Although designing robust losses or explicit regularisation is easier and more flexible in practice, the performance is not the optimal yet. 1 One training example is composed of an input and its corresponding label. A semantically abnormal example means the input is semantically unrelated to its label, which may come from corrupted input or label. For example, in Figure 3 in the supplementary material: 1) Out-of-distribution anomalies: An image may contain only or an object which does not belong to any training class; 2) In-distribution anomalies: An image of class a may be annotated to class b or an image may contain more than one semantic object. Regarding examples weighting, there is a core research question which is not well answered yet: What training examples should be focused on and how large the emphasis spread should be? In this work, we present a thorough study of this practical question under different settings. 
For better analysis, we propose two basic and necessary concepts: emphasis focus and spread with explicit definition in Sec. 3.2. They are conceptually introduced as follows: Emphasis focus. It is a common practice to focus on harder instances when training DNNs . When a dataset is clean, it achieves faster convergence and better performance to emphasise on harder examples because they own larger gradient magnitude, which means more information and a larger update step for model's parameters. However, when severe noise exists, as demonstrated in , DNNs learn simple meaningful patterns first before memorising abnormal ones. In other words, anomalies are harder to fit and own larger gradient magnitude in the later stage. Consequently, if we use the default sample weighting in categorical cross entropy (CCE) where harder samples obtain higher weights, anomalies tend to be fitted well especially when a network has large enough capacity. That is why we need to move the emphasis focus towards relatively easier ones, which serves as emphasis regularisation. Emphasis spread. We term the weighting variance of training examples emphasis spread. The key concept is that we should not treat all examples equally, neither should we let only a few be emphasised and contribute to the training. Therefore, when emphasis focus changes, the emphasis spread should be adjusted accordingly. We integrate emphasis focus and spread into a unified example weighting framework. Emphasis focus defines what training examples own higher weights while emphasis spread indicates how large variance over their weights. Specifically, we propose gradient rescaling (GR), which modifies the magnitude of logit vector's gradient. The logit vector is the output of the last fully connected (FC) layer of a network. We remark that we do not design the weighting scheme heuristically from scratch. Instead, it is naturally motivated by the gradient analysis of several loss functions. Interestingly, GR can be naturally connected to examples weighting, robust losses, explicit regularisation: 1) The gradient magnitude of logit vector can be regarded as weight assignment that is built-in in loss functions (; ; b). Therefore, rescaling the gradient magnitude equals to adjusting the weights of examples; 2) A specific loss function owns a fixed gradient derivation. Adjusting the gradient can be treated as a more direct and flexible way of modifying optimisation objectives; 3) Instead of focusing on harder examples 2 by default, we can adjust emphasis focus to relative easier ones when noise is severe. GR serves as emphasis regularisation and is different from standard regularisers, e.g., L2 weight decay constraints on weight parameters and Dropout samples neural units randomly ; GR is simple yet effective. We demonstrate its effectiveness on diverse computer vision tasks using different net architectures: 1) Image classification with clean training data; 2) Image classification with synthetic symmetric label noise, which is more challenging than asymmetric noise evaluated by (; ; 3) Image classification with real-world unknown anomalies, which may contain open-set noise, e.g., images with only , or outliers, etc.; 4) Video person re-identification, a video retrieval task containing diverse anomalies. Beyond, we show that GR is notably better than other standard regularisers, e.g., L2 weight decay and dropout. Besides, to comprehensively understand GR's behaviours, we present extensive ablation studies. Main contribution. 
Intuitively and principally, we claim that two basic factors, emphasis focus and spread, should be babysat simultaneously when it comes to examples weighting. To the best of our knowledge, we are the first to thoroughly study and analyse them together in a unified framework. Aside from examples weighting, robust losses minimisation and explicit regularisation techniques, there are another two main perspectives for training robust and accurate DNNs when anomalies exist: 2 An example's difficulty can be indicated by its loss (; ;), gradient magnitude , or input-to-label relevance score . The input-to-label relevance score means the probability of an input belonging to its labelled class predicted by a current model. The difficulty of an example may change as the model learns. In summary, higher difficulty, larger loss, larger gradient magnitude, and lower input-to-label relevance score are equal concepts. 1) Robust training strategies (; ; ;); 2) Noise-aware modelling, and alternative label and parameter optimisation are popular when only label noise exists. Some methods focus on noise-aware modelling for correcting noisy labels or empirical losses (; ; ; ; ; ; ; a). However, it is non-trivial and time-consuming to learn a noise-aware model, which also requires prior extra information or some specific assumptions. For example, Masking (a) is assisted by human cognition to speculate the noise structure of noise-aware matrix while (; ; ;) exploit an extra clean dataset, which is a hyper-factor and hard to control in practice. Some other algorithms iteratively train the model and infer latent true labels ). Those methods have made great progress on label noise. But they are not directly applicable to unknown diverse semantic anomalies, which covers both out-of-distribution and in-distribution cases. We note that proposed some theorems showing that empirical risk minimization is robust when the loss function is symmetric and the noise type is label noise. However, they are not applicable for deep learning under arbitrary unknown noise: 1) We remark that we target at the problem of diverse or arbitrary abnormal examples, where an input may be out-of-distribution, i.e., not belonging to any training class. As a , the symmetric losses custom-designed for label noise are not applicable. 2) GR is independent of empirical loss expressions as presented in Table 1. Therefore, one specific loss is merely an indicator of how far we are away from a specific minimisation objective. It has no impact on the robustness of the learning process since it has no direct influence on the gradient back-propagation. Similar to the prior work of rethinking generalisation , we need to rethink robust training under diverse anomalies, where the robustness theorems conditioned on symmetric losses and label noise are not directly applicable. Notation. We are given N training examples, where (x i, y i) denotes i−th sample with input x i ∈ R D and label y i ∈ {1, 2, ..., C}. C is the number of classes. Let's consider a deep neural network z composed of an embedding network f (·): Generally, the linear classifier is the last FC layer which produces the final output of z, i.e., logit vector z ∈ R C. To obtain probabilities of a sample belonging to different classes, logit vector is normalised by a softmax function: p(j|x i) is the probability of x i belonging to class j. 
A sample's input-to-label relevance score is defined by $p_i = p(y_i|x_i)$, the probability that the current model assigns to the labelled class. In what follows, we will uncover the sample weighting in popular losses: CCE, Mean Absolute Error (MAE) and Generalised Cross Entropy (GCE). CCE. The CCE loss with respect to $(x_i, y_i)$, and its gradient with respect to $z_{ij}$, are defined as $L_{\text{CCE}}(x_i, y_i) = -\log p(y_i|x_i)$ and $\partial L_{\text{CCE}}/\partial z_{ij} = p(j|x_i) - \mathbb{1}[j = y_i]$. Therefore, we have $\|\partial L_{\text{CCE}}/\partial \mathbf{z}_i\|_1 = 2(1 - p_i)$. Here we choose the L1 norm to measure the magnitude of the gradient because of its simpler statistics and computation. Since we back-propagate $\partial L_{\text{CCE}}/\partial \mathbf{z}_i$ to update the model's parameters, an example's gradient magnitude determines how much impact it has, i.e., its weight $w_i^{\text{CCE}} \propto 1 - p_i$. In CCE, more difficult examples with smaller $p_i$ get higher weight. MAE. When it comes to MAE, the loss of $(x_i, y_i)$ and its gradient with respect to $z_{im}$ are $L_{\text{MAE}}(x_i, y_i) = 2(1 - p_i)$ and $\partial L_{\text{MAE}}/\partial z_{im} = 2 p_i \big(p(m|x_i) - \mathbb{1}[m = y_i]\big)$. Therefore, $w_i^{\text{MAE}} \propto \|\partial L_{\text{MAE}}/\partial \mathbf{z}_i\|_1 = 4 p_i (1 - p_i)$. In MAE, those images whose input-to-label relevance scores are 0.5 become the emphasis focus. GCE. In GCE, the loss calculation of $(x_i, y_i)$ and the gradient with respect to the logit vector $\mathbf{z}_i$ are $L_{\text{GCE}}(x_i, y_i) = (1 - p_i^q)/q$ and $\partial L_{\text{GCE}}/\partial z_{ij} = p_i^q \big(p(j|x_i) - \mathbb{1}[j = y_i]\big)$, where $q \in (0, 1]$, so that $w_i^{\text{GCE}} \propto p_i^q (1 - p_i)$. In this case, the emphasis focus can be adjusted from 0 to 0.5 when $q$ ranges from 0 to 1. However, in their practice, instead of using this naive version, a truncated one is applied: the loss of an example with $p_i \le 0.5$ is constant so that its gradient is zero, which means it is dropped and does not contribute to the training. The main drawback is that at the initial stage, the model is not well learned, so the predicted $p_i$ of most samples are smaller than 0.5. To address it, alternative convex search is exploited for iterative data pruning and parameter optimisation, making it quite complex and less appealing in practice. [Figure 1: A sample's weight $w_i$ along with its input-to-label relevance score $p_i$. GR is a unified sample reweighting framework from the perspective of gradient rescaling, where the emphasis focus and spread can be adjusted by choosing proper λ and β in practice. (a) GR, CCE, MAE, GCE: three settings of GR are shown, (β = 2, λ = 0), (β = 8, λ = 0.5) and (β = 12, λ = 1), whose corresponding emphasis focuses are 0, 0∼0.5 and 0.5. (b) GR when fixing λ = 0.5 (emphasis focus within 0∼0.5) or λ = 2 (emphasis focus within 0.5∼1). (c) GR when fixing β = 8: when λ increases, the emphasis focus moves towards 1 and the emphasis spread drops. Better viewed in colour.] The derivation details of these equations are presented in Section B of the supplementary material. A loss function provides supervision information by its derivative with respect to a network's output. Therefore, there are two perspectives for improving the supervision information: 1) modifying the loss format to improve its corresponding derivative; 2) manipulating the gradient straightforwardly. In this work, we choose to control the gradient, which is more direct and flexible. According to the gradient expressions above, the gradients of CCE, MAE and GCE share the same direction. Our proposal GR unifies them from the gradient perspective. Being independent of loss formulas, a sample's gradient is rescaled linearly so that its weight becomes $w_i^{\text{GR}}$, where λ, β are hyper-parameters for controlling the emphasis focus and spread, respectively. By choosing a larger λ when more anomalies exist, GR regularises example weighting by moving the emphasis focus toward relatively easier training data points, thus embracing noise-robustness. For clarification, we explicitly define the emphasis focus and spread over training examples: Definition 1 (Emphasis Focus ψ). The emphasis focus refers to those examples that own the largest weight.
Since an example's weight is determined by its input-to-label relevance score p i, for simplicity, we define the emphasis focus to be an input-to-label score to which the largest weight is assigned, i.e., ψ = arg max Definition 2 (Emphasis Spread σ). The emphasis spread is the weight variance over all training instances in a mini-batch, i.e., σ = E((w, where E(·) denotes the expectation value of a variable. With these definitions, we differentiate GR with other methods in Table 1. We show the sample weighting curves of GR with different settings in Figure 1. As shown in Figure 1c, the emphasis spread declines as λ increases. Therefore, we choose larger β values when λ is larger in Sec. 4.2.1. Principally, transformation g could be designed as any monotonically increasing function. Because the non-linear exponential mapping can change the overall weights' variance and relative weights between any two examples, we choose g(·) = exp(·), which works well in our practice. By integral, the exact loss format is an error function (non-elementary). We summarise several existing cases as follows (the ellipsis refers to other potential options which can be explored in the future): Let's regard a deep network z as a black box, which produces C logits. C is the class number. Then during gradient back-propagation, an example's impact on the update of z is determined by its gradient w.r.t. the logit vector. The impact can be decomposed into two factors, i.e., gradient direction and magnitude. To reduce the impact of a noisy sample, we can either reduce its gradient magnitude or amend its gradient direction. In this work, inspired by the analysis of CCE, MAE and GCE, which only differ in the gradient magnitude while perform quite differently, leading to a natural motivation that gradient magnitude matters. That is why we explore rescaling the gradient magnitude as illustrated in Figure 1. It is worth studying amending gradient directions in the future. Datasets. We test on CIFAR-10 and CIFAR-100 , which contain 10 and 100 classes, respectively. In CIFAR-10, the training data contains 5k images per class while the test set includes 1k images per class. In CIFAR-100, there are 500 images per class for training and 100 images per class for testing. Implementation details. On CIFAR-10, following , we adopt ResNet-20 and ResNet-56 as backbones so that we can compare fairly with their reported . On CIFAR-100, we follow D2L to choose ResNet-44 and compare with its reported . We also use an SGD optimiser with momentum 0.9 and weight decay 10 −4. The learning rate is initialised with 0.1, and multiplied with 0.1 every 5k iterations. We apply the standard data augmentation as in (; : The original images are padded with 4 pixels on every side, followed by a random crop of 32 × 32 and horizontal flip. The batch size is 128. Table 2 : Classification accuracies (%) of CCE, and GR on clean CIFAR-10 and CIFAR-100. λ = 0 means the emphasis focus is 0 where we fix β = 2. β = 0 means all examples are treated equally. Backbone CCE GR (λ = 0) GR (β = 0) Results. Our purpose is to show GR can achieve competitive performance with CCE under clean data to demonstrate its general applicability. As reported in D2L, all noise-tolerant proposals (; perform similarly with CCE when training labels are clean. Therefore we do not present other related competitors here. Our reimplemented are shown in Table 2 . For reference, the reported in on CIFAR-10 with CCE are 91.3% for ResNet-20 and 93.0% for ResNet-56. 
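As a small numerical illustration of the weighting curves derived above (up to constant factors, w ∝ 1 − p for CCE, p(1 − p) for MAE and p^q(1 − p) for GCE) and of Definitions 1 and 2, the sketch below computes the emphasis focus and emphasis spread of each curve. The exact GR weighting function is not reproduced here, so only the fixed curves that GR generalises are shown; names are ours.

import numpy as np

def builtin_weight(p, loss="CCE", q=0.7):
    # Built-in example weights as a function of the input-to-label score p_i.
    p = np.asarray(p, dtype=float)
    if loss == "CCE":
        return 1.0 - p               # emphasis focus at p = 0
    if loss == "MAE":
        return p * (1.0 - p)         # emphasis focus at p = 0.5
    if loss == "GCE":
        return (p ** q) * (1.0 - p)  # emphasis focus at q / (1 + q), i.e., within (0, 0.5)
    raise ValueError(loss)

p_grid = np.linspace(1e-3, 1.0, 1000)
for loss in ("CCE", "MAE", "GCE"):
    w = builtin_weight(p_grid, loss)
    focus = p_grid[np.argmax(w)]  # Definition 1: the score receiving the largest weight
    spread = np.var(w)            # Definition 2: weight variance (over a mini-batch in practice)
    print(f"{loss}: emphasis focus ~ {focus:.2f}, emphasis spread ~ {spread:.4f}")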
In D2L, the on CIFAR-100 with ResNet-44 is 68.2%. Our reimplemented performance of CCE is only slightly different. For GR, we observe the best performance when emphasis focus is 0, i.e., λ = 0. Furthermore, it is insensitive to a wide range of emphasis spreads according to our observations in Figure 5 in the supplementary material. Treating training examples equally. As shown in Table 2, we obtain competitive performance by treating all training examples equally when β = 0. This is quite interesting and motivates us that sample differentiation and reweighting work much better only when noise exists. Symmetric noise generation. Given a probability r, the original label of an image is changed to one of the other class labels uniformly following (; . r denotes the noise rate. Symmetric label noise generally exists in large-scale real-world applications where the dataset scale is so large that label quality is hard to guarantee. It is also demonstrated in that it is more challenging than asymmetric noisy labels ), which assume that label errors only exist within a predefined set of similar classes. All augmented training examples share the same label as the original one. To understand GR well empirically, we explore the behaviours of GR on CIFAR-10 with r = 20%, 40%, 60%, 80%, respectively. We use ResNet-56 which has larger capacity than ResNet-20. Design choices. We mainly analyse the impact of different emphasis focuses for different noise rates. We explore 5 emphasis focuses by setting β = 0 or different λ: 1) None: β = 0. There is no emphasis focus since all examples are treated equally; 2) 0: λ = 0; 3) 0∼0.5: λ = 0.5; 4) 0.5: λ = 1; 5) 0.5∼1: λ = 2. We remark that when λ is larger, the emphasis focus is higher, leading to relatively easier training data points are emphasised. As shown in Figure 1, when emphasis focus changes, emphasis spread changes accordingly. Therefore, to set a proper spread for each emphasis focus, we try 4 emphasis spread and choose the best one 3 to compare the impact of emphasis focus. Results analysis. We show the in Table 3. The intact training set serves as a validation set and we observe that its accuracy is always consistent with the final test accuracy. This motivates us that we can choose our model's hyper-parameters β, λ via a validation set in practice. We display the training dynamics in Figure 2. We summarise our observations as follows: Fitting and generalisation. We observe that CCE always achieves the best accuracy on corrupted training sets, which indicates that CCE has a strong data fitting ability even if there is severe noise . As a , CCE has much worse final test accuracy than most models. Emphasising on harder examples. When there exist abnormal training examples, we obtain the worst final test accuracy if emphasis focus is 0, i.e., CCE and GR with λ = 0. This unveils that in applications where we have to learn from noisy training data, it will hurt the model's generalisation dramatically if we use CCE or simply focus on harder training data points. Emphasis focus. When noise rate is 0, 20%, 40%, 60%, and 80%, we obtain the best final test accuracy when λ = 0, λ = 0.5, λ = 1, λ = 2, and λ = 2, respectively. This demonstrates that when noise rate is higher, we can improve a model's robustness by moving emphasis focus towards relatively less difficult examples with a larger λ, which is informative in practice. Emphasis spread. 
As displayed in Table 3 and Figures 7-10 in the supplementary material, emphasis spread also matters a lot when fixing emphasis focus, i.e., fixing λ. For example in Table 3, when λ = 0, although focusing on harder examples similarly with CCE, GR can outperform CCE by modifying the emphasis spread. As shown in Figures 7-10, some models even collapse and cannot converge if the emphasis spread is not rational. Implementation details. We follow the same settings as MentorNet to compare fairly with its reported . Optimiser and data augmentation are described in Section 4.1. Competitors. FullModel is the standard CCE trained using L2 weight decay and dropout . Forgetting searches the dropout parameter in the range of (0.2-0.9). Self-paced , Focal Loss , and MentorNet are representatives of example reweighting algorithms. Reed Soft ) is a weaklysupervised learning method. All methods use GoogLeNet V1. Results. We compare the under different noise rates in Table 4. GR with fixed hyperparameters β = 8, λ = 0.5 outperforms the state-of-the-art GCE by a large margin, especially when label noise becomes severe. Better can be expected when optimising the hyper-parameters for each case. We remark that FullModel (naive CCE) was trained with L2 weight decay and dropout. However, GR's regularization effect is much better in both clean and noisy cases. Figure 6 in the supplementary material. We have two key observations: 1) When noise rate increases, better generalisation is obtained with higher emphasis focus, i.e., focusing on relatively easier examples; 2) Both overfitting and underfitting lead to bad generalisation. For example,'CCE: 0' fits training data much better than the others while'GR: None' generally fits it unstably or a lot worse. Better viewed in colour. Implementation details. Most baselines have been reimplemented in with the same settings. Therefore, for direct comparison, we follow exactly their experimental configurations and use ResNet-44 . Optimiser and data augmentation are described in Section 4.1. We repeat training and evaluation 5 times where different random seeds are used for generating noisy labels and model's initialisation. The mean test accuracy and standard deviation are reported. Competitors. We compare with D2L, GCE , and other baselines reimplemented in D2L: 1) Standard CCE; 2) Forward uses a noise-transition matrix to multiply the network's predictions for label correction; 3) Backward applies the noise-transition matrix to multiply the CCE losses for loss correction; 4) Bootstrapping trains models with new labels generated by a convex combination of the original ones and their predictions. The convex combination can be soft (Boot-soft) or hard (Boot-hard); 5) D2L achieves noise-robustness from a novel perspective of restricting the dimensionality expansion of learned subspaces during training and is the state-of-the-art; 6) Since GCE outperforms MAE , we only reimplement GCE for comparison; 7) SL (c) boosts CCE symmetrically with a noise-robust counterpart, i.e., reverse cross entropy. Results. We compare the of GR and other algorithms in Table 5. GR outperforms other competitors by a large margin, especially when label noise is severe, e.g., r = 40% and 60%. More importantly, we highlight that GR is much simpler without any extra information. Compared with Forward and Backward, GR does not need any prior knowledge about the noise-transition matrix. Bootstrapping targets at label correction and is time-consuming. 
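For completeness, the symmetric label-noise protocol used in these experiments (flip a label with probability r to one of the other C − 1 classes uniformly) can be generated as in the following sketch; the function name and the use of NumPy are ours.

import numpy as np

def symmetric_label_noise(labels, r, num_classes, seed=0):
    # labels: integer array of clean labels in {0, ..., num_classes - 1}.
    rng = np.random.default_rng(seed)
    noisy = np.array(labels, copy=True)
    flip = rng.random(len(noisy)) < r
    for i in np.flatnonzero(flip):
        candidates = [c for c in range(num_classes) if c != noisy[i]]
        noisy[i] = rng.choice(candidates)
    return noisy

# Example: 40% symmetric noise on CIFAR-100 labels.
# noisy_labels = symmetric_label_noise(clean_labels, r=0.4, num_classes=100)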
D2L estimates the local intrinsic dimensionality every b mini-batches and checks the turning point for dimensionality expansion every e epochs. However, b and e are difficult to choose and iterative monitoring is time-consuming. Dataset. Clothing 1M contains 1 million images. It is an industrial-level dataset and its noise structure is agnostic. According to , around 61.54% training labels are reliable, i.e., the noise rate is about 38.46%. There are 14 classes from several online shopping websites. In addition, there are 50k, 14k, and 10k images with clean labels for training, validation, Table 5: The accuracies (%) of GR and recent approaches on CIFAR-100. The of fixed parameters (β = 8, λ = 0.5) are shown in the second last column. With a little effort for optimising β and λ, the and corresponding parameters are presented in the last column. The trend is consistent with Table 3: When r raises, we can increase β, λ for better robustness. The increasing scale is much smaller. This is because CIFAR-100 has 100 classes so that its distribution of p i (input-to-label relevance score) is different from CIFAR-10 after softmax normalisation. and testing, respectively. Here, we follow and compare with existing methods that only learn from noisy training data since we would like to avoid exploiting auxiliary information. Implementation details. We train ResNet-50 and follow exactly the same settings as : 1) Initialisation: ResNet-50 is initialised by publicly available model pretrained on ImageNet ; 2) Optimisation: A SGD optimiser with a momentum of 0.9 and a weight decay of 10 −3 is applied. The learning rate starts at 10 −3 and is divided by 10 after 5 epochs. Training terminates at 10 epochs; 3) Standard data augmentation: We first resize a raw input image to 256 × 256, and then crop it randomly at 224 × 224 followed by random horizontal flipping. The batch size is 64 due to memory limitation. Since the noise rate is around 38.46%, we simply set λ = 1, β = 16 following Table 3 when noise rate is 40%. Competitors. We compare with other noise-robust algorithms that have been evaluated on Clothing 1M with similar settings: 1) Standard CCE ; 2) Since Forward outperforms Backward on Clothing 1M , we only present the of Forward; 3) S-adaptation applies an additional softmax layer to estimate the noise-transition matrix ; 4) Masking is a human-assisted approach that conveys human cognition to speculate the structure of the noise-transition matrix (a). 5) Label optimisation learns latent true labels and model's parameters iteratively. Two regularisation terms are added for label optimisation and adjusted in practice. Results. The are compared in Table 6. Under real-world agnostic noise, GR also outperforms the state-of-the-art. It is worth mentioning that the burden of noise-transition matrix estimation in Forward and S-adaptation is heavy due to alternative optimisation steps, and such estimation is non-trivial without big enough data. Masking exploits human cognition of a structure prior and reduces the burden of estimation, nonetheless its performance is not competitive. Similarly, Label Optimisation requires alternative optimisation steps and is time-consuming. Dataset and evaluation settings. MARS contains 20,715 videos of 1,261 persons . There are 1,067,516 frames in total. Because person videos are collected by tracking and detection algorithms, abnormal examples exist as shown in Figure 3 in the supplementary material. 
We remark that there are some anomalies containing only or an out-of-distribution person. Exact noise type and rate are unknown. Following standard settings, we use 8,298 videos of 625 persons for training and 12,180 videos of the other 636 persons for testing. We report the cumulated matching characteristics (CMC) and mean average precision (mAP) . and (c), respectively. CCE* and GCE* are our reproduced using the Caffe framework (Implementation details. Following (; a), we train GoogleNet V2 and treat a video as an image set, which means we use only appearance information without exploiting latent temporal information. A video's representation is simply the average fusion of its frames' representations. The learning rate starts from 0.01 and is divided by 2 every 10k iterations. We stop training at 50k iterations. We apply an SGD optimiser with a weight decay of 0.0005 and a momentum of 0.9. The batch size is 180. We use standard data augmentation: a 227 × 227 crop is randomly sampled and flipped after resizing an original image to 256 × 256. Training settings are the same for each method. We implement GCE with its reported best settings. At testing, following (a; ;), we first L 2 normalise videos' features and then calculate the cosine similarity between every two of them. Results. The are displayed in Table 7. Although DRSA and CAE exploit extra temporal information by incorporating attention mechanisms, GR is superior to them in terms of both effectiveness and simplicity. OSM+CAA (a) is the only comparable method. However, OSM+CAA combines CCE and weighted contrastive loss to address anomalies, thus being more complex than GR. In addition, we highlight that one query may have multiple matching instances in the MARS benchmark. Consequently, mAP is a more reliable and accurate performance assessment. GR is the best in terms of mAP. In Table 8, we compare our proposed regulariser GR with other standard ones, i.e., L2 weight decay and Dropout . We set the dropout rate to 0.2 and L2 weight decay rate to 10 −4. For GR, as mentioned in Section 4.2.3, we fix β = 8, λ = 0.5. Interestingly, Dropout+L2 achieves 52.8% accuracy, which is even better than the state-of-the-art in Table 5, i.e., D2L with 52.0% accuracy. However, GR is better than those standard regularisers and their combinations significantly. GR works best when it is together with L2 weight decay. Table 8: Results of GR and other standard regularisers on CIFAR-100. We set r = 40%, i.e., the label noise is severe but not belongs to the majority. We train ResNet-44. We report the average test accuracy and standard deviation (%) over 5 trials. Baseline means CCE without regularisation. In this work, we present three main contributions: 1) We analyse and answer a core research question: What training examples should be focused on and how large the emphasis spread should be? 2) We uncover and analyse that two basic factors, emphasis focus and spread, should be babysat simultaneously when it comes to examples weighting. Consequently, we propose a simple yet effective gradient rescaling framework serving as emphasis regularisation. 3) Extensive experiments on different tasks using different network architectures are reported for better understanding and demonstration of GR's effectiveness, which are also valuable for applying GR in practice. . Out-of-distribution anomalies: 1) The first image in the 3rd row contains only and no semantic information at all. 
2) The 2nd first image or the last one in the 3rd row may contain a person that does not belong to any person in the training set. In-distribution anomalies: 1) Some images of deer class are wrongly annotated to horse class. 2) We cannot decide the object of interest without any prior when an image contains more than one object, e.g., some images contain two persons in the 2nd row. For left and right sides of Eq., we calculate their derivatives w.r.t. z ij simultaneously. If j = y i, In summary, the derivation of softmax layer is: B.2 DERIVATION OF CCE According to Eq., we have Therefore, we obtain (the parameters are omitted for brevity), B.3 DERIVATION OF MAE According to Eq., we have Therefore, we obtain According to Eq., we have Therefore, we obtain B.5 DERIVATIVES W.R.T. LOGITS z i The calculation is based on Eq. and Eq.. If j = y i, we have: If j = y i, it becomes: In summary, ∂L CCE /∂z i can be represented as: otherwise (j = y i): In summary, ∂L MAE /∂z i is: The calculation is based on Eq. and Eq.. If j = y i, we have: If j = y i, it becomes: In summary, ∂L GCE /∂z i can be represented as: C SMALL-SCALE FINE-GRAINED VISUAL CATEGORISATION OF VEHICLES How does GR perform on small datasets, for example, the number of data points is no more than 5,000? We have tested GR on CIFAR-10 and CIFAR-100 in the main paper. However, both of them contain a training set of 50,000 images. For this question, we answer it from different perspectives as follows: 1. The problem of label noise we study on CIFAR-10 and CIFAR-100 in Section 4.2 is of similar scale. For example: • In Table 4, when noise rate is 80% on CIFAR-10, the number of clean training examples is around 50, 000 × 20% = 5, 000 × 2. Therefore, this clean set is only two times as large as 5,000. Beyond, the learning process may be interrupted by other noisy data points. • In Table 5, when noise rate is 60% on CIFAR-100, the number of clean training data points is about 50, 000 × 40% = 5, 000 × 4, i.e., four times as large as 5,000. 2. We compare GR with other standard regularisers on a small-scale fine-grained visual categorisation problem in Table 9. Vehicles-10 Dataset. In CIFAR-100 , there are 20 coarse classes, including vehicles 1 and 2. Vehicles 1 contains 5 fine classes: bicycle, bus, motorcycle, pickup truck, and train. Vehicles 2 includes another 5 fine classes: lawn-mower, rocket, streetcar, tank, and tractor. We build a small-scale vehicles classification dataset composed of these 10 vehicles from CIFAR-100. Specifically, the training set contains 500 images per vehicle class while the testing set has 100 images per class. Therefore, the number of training data points is 5,000 in total. We evaluate on CIFAR-100, whose 100 classes are grouped into 20 coarse classes. Every coarse class has 5 fine classes. Within each coarse class, an image's label is flipped to one of the other four labels uniformly with a probability r. r represents the noise rate. We set r = 0.2. The are displayed in Table 10. When GR is used, the performance is better than its counterparts without GR. The are shown in Table 11. Proposal: Gradient rescaling incorporates emphasis focus (centre/focal point) and emphasis spread, and serves as explicit regularisation in terms of sample reweighting/emphasis. Finding: When noise rate is higher, we can improve a model's robustness by moving emphasis focus towards relatively less difficult examples. The more detailed on CIFAR-100 are shown in Table 12, which is the supplementary of Table 5 in the main text. 
Table 12: Exploration of GR with different emphasis focuses (centres) and spreads on CIFAR-100 when r = 20%, 40%, 60%, respectively. This table presents detailed information of optimising λ, β mentioned in Table 5 in the paper. Specifically, for each λ, we try 5 β values from {2, 4, 6, 8, 10} and select the best one as the final of the λ. We report the mean test accuracy over 5 repetitions. Our key finding is demonstrated again: When r raises, we can increase β, λ for better robustness. The increasing scale is much smaller than CIFAR-10. This is because CIFAR-100 has 100 classes so that its distribution of p i (input-to-label relevance score) is different from CIFAR-10 after softmax normalisation. Figure 2 in the paper. We have two key observations: 1) When noise rate increases, better generalisation is obtained with higher emphasis focus, i.e., focusing on relatively easier examples; 2) Both overfitting and underfitting lead to bad generalisation. For example,'CCE: 0' fits training data much better than the others while'GR: None' generally fits it unstably or a lot worse. Better viewed in colour.. From left to right, the of four emphasis focuses 0, 0∼0.5, 0.5, 0.5∼1 with different emphasis spreads are displayed in each column respectively. When λ is larger, β should be larger as displayed in Figure 1c in the paper. Specifically: 1) when λ = 0: we tried β = 0.5, 1, 2, 4; 2) when λ = 0.5: we tried β = 4, 8, 12, 16; 3) when λ = 1: we tried β = 8, 12, 16, 20; 4) when λ = 2: we tried β = 12, 16, 20, 24.
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rylUOn4Yvr
Robust Discriminative Representation Learning via Gradient Rescaling: An Emphasis Regularisation Perspective
Generative Adversarial Networks (GANs) have achieved remarkable in the task of generating realistic natural images. In most applications, GAN models share two aspects in common. On the one hand, GANs training involves solving a challenging saddle point optimization problem, interpreted as an adversarial game between a generator and a discriminator functions. On the other hand, the generator and the discriminator are parametrized in terms of deep convolutional neural networks. The goal of this paper is to disentangle the contribution of these two factors to the success of GANs. In particular, we introduce Generative Latent Optimization (GLO), a framework to train deep convolutional generators without using discriminators, thus avoiding the instability of adversarial optimization problems. Throughout a variety of experiments, we show that GLO enjoys many of the desirable properties of GANs: learning from large data, synthesizing visually-appealing samples, interpolating meaningfully between samples, and performing linear arithmetic with noise vectors. Generative Adversarial Networks (GANs) BID15 are a powerful framework to learn generative models of natural images. GANs learn these generative models by setting up an adversarial game between two learning machines. On the one hand, a generator plays to transform noise vectors into fake samples, which resemble real samples drawn from a distribution of natural images. On the other hand, a discriminator plays to distinguish between real and fake samples. During training, the generator and the discriminator learn in turns. First, the discriminator learns to assign high scores to real samples, and low scores to fake samples. Then, the generator learns to increase the scores of fake samples, as to fool the discriminator. After proper training, the generator is able to produce realistic natural images from noise vectors. Recently, GANs have been used to produce high-quality images resembling handwritten digits, human faces, and house interiors BID36. Furthermore, GANs exhibit three strong signs of generalization. First, the generator translates linear interpolations in the noise space into semantic interpolations in the image space. In other words, a linear interpolation in the noise space will generate a smooth interpolation of visually-appealing images. Second, the generator allows linear arithmetic in the noise space. Similarly to word embeddings BID31, linear arithmetic indicates that the generator organizes the noise space to disentangle the nonlinear factors of variation of natural images into linear statistics. Third, the generator is able to to synthesize new images that resemble those of the data distribution. This allows for applications such as image in-painting BID18 and super-resolution BID26.Despite their success, training and evaluating GANs is notoriously difficult. The adversarial optimization problem implemented by GANs is sensitive to random initialization, architectural choices, and hyper-parameter settings. In many cases, a fair amount of human care is necessary to find the correct configuration to train a GAN in a particular dataset. It is common to observe generators with similar architectures and hyper-parameters to exhibit dramatically different behaviors. Even when properly trained, the ing generator may synthesize samples that resemble only a few localized regions (or modes) of the data distribution BID14. While several advances have been made to stabilize the training of GANs BID37, this task remains more art than science. 
The difficulty of training GANs is aggravated by the challenges in their evaluation: since evaluating the likelihood of a GAN with respect to the data is an intractable problem, the current gold standard to evaluate the quality of GANs is to eyeball the samples produced by the generator. The evaluation of discriminators is also difficult, since their visual features do not always transfer well to supervised tasks BID12 BID13. Finally, the application of GANs to non-image data has been relatively limited. Research question. To model natural images with GANs, the generator and discriminator are commonly parametrized as deep Convolutional Networks (convnets) BID24. Therefore, it is reasonable to hypothesize that the reasons for the success of GANs in modeling natural images come from two complementary sources: (A1) leveraging the powerful inductive bias of deep convnets; (A2) the adversarial training protocol. This work attempts to disentangle the factors of success (A1) and (A2) in GAN models. Specifically, we propose and study one algorithm that relies on (A1) and avoids (A2), but still obtains competitive results when compared to a GAN. We investigate the importance of the inductive bias of convnets by removing the adversarial training protocol of GANs (Section 2). Our approach, called Generative Latent Optimization (GLO), maps one learnable noise vector to each of the images in our dataset by minimizing a simple reconstruction loss. Since we are predicting images from learnable noise, GLO borrows inspiration from recent methods to predict learnable noise from images BID3. Alternatively, one can understand GLO as an auto-encoder where the latent representation is not produced by a parametric encoder, but learned freely in a non-parametric manner. In contrast to GANs, we keep track of the correspondence between each learned noise vector and the image that it represents. Hence, the goal of GLO is to find a meaningful organization of the noise vectors, such that they can be mapped to their target images. To turn GLO into a generative model, we observe that it suffices to learn a simple probability distribution on the learned noise vectors. In our experiments (Section 3), we show that GLO inherits many of the appealing features of GANs, while enjoying a much simpler training protocol. In particular, we study the efficacy of GLO to compress and decompress a dataset of images (Section 3.3.1), generate new samples (Section 3.3.2), perform linear interpolations and extrapolations in the noise space (Section 3.3.3), and perform linear arithmetic (Section 3.3.5). Our experiments provide quantitative and qualitative comparisons to Principal Component Analysis (PCA), Variational Autoencoders (VAE) and GANs. We focus on the CelebA and LSUN-Bedroom datasets. We conclude our exposition in Section 5. First, we consider a large set of images {x_1, ..., x_N}, where each image x_i ∈ X has dimensions 3 × w × h. Second, we initialize a set of d-dimensional random vectors {z_1, ..., z_N}, where z_i ∈ Z ⊆ R^d for all i = 1, ..., N. Third, we pair the dataset of images with the random vectors, obtaining the dataset {(z_1, x_1), ..., (z_N, x_N)}. Finally, we jointly learn the parameters θ in Θ of a generator g_θ: Z → X and the optimal noise vector z_i for each image x_i, by solving $\min_{\theta \in \Theta} \frac{1}{N} \sum_{i=1}^{N} \min_{z_i \in \mathcal{Z}} \ell\big(g_\theta(z_i), x_i\big)$. In the previous, $\ell: X \times X \to \mathbb{R}^+$ is a loss function measuring the reconstruction error from g_θ(z_i) to x_i. We call this model Generative Latent Optimization (GLO). Learnable z_i.
In contrast to autoencoders BID6, which assume a parametric model f: X → Z, usually referred to as the encoder, to compute the vector z from samples x, and minimize the reconstruction loss (g(f (x)), x), in GLO we jointly optimize the inputs z 1,..., z N and the model parameter θ. Since the vector z is a free parameter, our model can recover all the solutions that could be found by an autoencoder, and reach some others. In a nutshell, GLO can be viewed as an "encoder-less" autoencoder, or as a "discriminator-less" GAN. Figure 1: Plot of the cumulative sum of the singular values of the optimal Z * matrix. We observe that the proposed GLO model has a better conditioned covariance matrix and therefore better fills the latent space. Choice of Z. The representation space Z should encapsulate all of our prior knowledge about the data {x 1, . . ., x N}. Since we are interested in matching the properties of GANs, we make similar choices to them when it comes to the representation space Z. The most common choices of the representation space for GANs are either the uniform distribution on the hypercube DISPLAYFORM1 In previous literature, Gaussian distributions lead to more stable GAN training BID36, we will take this choice to design our representation space. In GLO, the random vectors z are learnable and can therefore attain any value during the training process. To avoid extremely large values, we normalize our learnable noise vectors z at all times, to lay on the unit 2 sphere. Choice of loss function. On the one hand, the squared-loss function 2 (x, x) = x − x 2 2 is a simple choice, but leads to blurry (average) reconstructions of natural images. On the other hand, GANs use a convnet (the discriminator) as loss function. Since the early layers of convnets focus on edges, the samples from a GAN are sharper. Therefore, our experiments provide quantitative and qualitative comparisons between the 2 loss and the Laplacian pyramid Lap 1 loss DISPLAYFORM2 where L j (x) is the j-th level of the Laplacian pyramid representation of x BID27. Therefore, the Lap 1 loss weights the details at fine scales more heavily. In order to low-frequency content such as color information, we will use a weighted combination of the Lap 1 and the 2 costs. Optimization. For any choice of differentiable generator, the objective is differentiable with respect to z, and θ. Therefore, we will learn z and θ by Stochastic Gradient Descent (SGD). The gradient of with respect to z can be obtained by backpropagating the gradients through the generator function BID4. We project each z back to the representation space Z after each update. To have noise vectors laying on the unit 2 sphere, we project z after each update by dividing its value by max(z 2, 1). We initialize the z by either sampling them from a gaussian distribution or by taking the whitened PCA of the raw image pixels. We organized our experiments as follows. First, Section 3.1 describes the generative models that we compare, along with their implementation details. Section 3.2 reviews the image datasets used in our experiments. Section 3.3 discusses the of our experiments, including the compression of datasets (Section 3.3.1), the generation (Section 3.3.2) and interpolation (Section 3.3.3) of samples, and the of arithmetic operations with noise vectors (Section 3.3.5). First, PCA BID33, equivalent to a linear autoencoder BID1, where we retain the top 256 principal components. Second, DCGAN BID36. 
Since GANs do not come with a mechanism (inverse generator) to retrieve the random vector g −1 (x) associated with an image x, we estimate this random vector by 1) instantiating a random vector z 0, and 2) computing updates DISPLAYFORM0 by backpropagation until convergence, where is either the 2 or the Lap 1 loss. Our experiments measuring reconstruction error are disadvantageous to GANs, since these are models that are not trained to minimize this metric. We use the Adam optimizer BID20 with the parameters from BID36 Table 1: Reconstruction errors in MSE. We consider methods using both MSE and Lap 1 loss. We also specify the initialization method between random and PCA.Third, VAE BID21. We train a VAE with the same encoder and decoder architectures as DCGAN. We train it with the default hyper-parameters for 25 epochs. Third, GLO (proposed model). We will train a GLO model where the generator follows the same architecture as the generator in DCGAN. We use Stochastic Gradient Descent (SGD) to optimize both θ and z, setting the learning rate for θ at 1 and the learning rate of z at 10. After each udpdate, the noise vectors z are projected to the unit 2 sphere. In the sequel, we initialize the random vectors of GLO using a Gaussian distribution (for the CelebA dataset) or the top d principal components (for the LSUN dataset). We evaluate all models on two datasets of natural images. Unless specified otherwise, we use the prescribed training splits to train our generative models. All the images are rescaled to have three channels, center-cropped, and normalized to pixel values in [−1, +1].First, CelebA BID29 is a set of 202, 599 portraits of celebrities. We use the aligned and cropped version, scaled to 128 × 128 pixels. Second, LSUN BID44 ) is a set of millions of images of scenes belonging to different scene categories. Following the tradition in GAN papers BID36, we use the 3, 033, 042 images belonging to the bedroom category. We resize the images to 64 × 64 pixels. We compare the methods described on Section 3.1 when applied to the datasets described on Section 3.2. In particular, we evaluate the performance of the methods in the tasks of compressing a dataset, generating new samples, performing sample interpolation, and doing sample arithmetic. We start by measuring the reconstruction error in terms of the mean-squared loss 2 (x, x) = x−x 2 2 and the Lap 1 loss Table 1 shows the reconstruction error of all models and datasets for the 2. This gives a rough idea about the coverage of each model over the dataset. Figure 1 show the quantity of the representation space explained as a function of the number of eigenvectors used to reconstruct it. GLO trained from a random initialization is more aggressive about using the full representation space to spread the information around while PCA or autoencoders tend to concentrate the information in a few directions. For completeness, we computed image reconstructions for the various models on a held-out set of images. To this end we use face images from deep funneled images from Labeled Faces in the Wild BID17. In order to make the images similar to those found in CelebA we crop the images so as to align the location of eyes. The reconstructions of a random sample of images are presented in Fig. 10. Figure 4 shows samples from the each of the models on the CelebA dataset, and Figure 5 shows the same fro the LSUN dataset. 
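To summarise the optimisation described in Section 2, the sketch below shows one GLO update: the per-image codes z are free parameters optimised by SGD together with the generator, and each code is projected back onto the unit ℓ2 ball after the step. The generator module, the learning rates, and the use of plain MSE in place of the weighted ℓ2 + Lap1 loss are simplifications for illustration.

import torch
import torch.nn.functional as F

def glo_step(generator, z_all, images, idx, opt_theta, opt_z):
    # z_all: learnable (N, d) tensor holding one code per training image.
    recon = generator(z_all[idx])
    loss = F.mse_loss(recon, images)  # stand-in for the weighted l2 + Lap_1 loss
    opt_theta.zero_grad()
    opt_z.zero_grad()
    loss.backward()
    opt_theta.step()
    opt_z.step()
    with torch.no_grad():  # project each code: z <- z / max(||z||_2, 1)
        norms = z_all[idx].norm(dim=1, keepdim=True).clamp(min=1.0)
        z_all[idx] = z_all[idx] / norms
    return loss.item()

# Setup sketch: z_all = torch.randn(N, d, requires_grad=True)
# opt_z = torch.optim.SGD([z_all], lr=10.0); opt_theta = torch.optim.SGD(generator.parameters(), lr=1.0)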
In the case of GLO, we fit a Gaussian distribution with full covariance to the representation space Z, and sample from this distribution to generate new samples. We can see that the samples are visually appealing even when placing such a simple probabilistic model on the representation space. We leave more careful modeling of Z for future work. The latent space can be explored by decomposing the covariance matrix of the latent vectors and moving along the eigenvectors associated with the largest eigenvalues from an image. The resulting image transformation often contains information about attributes that vary in the dataset. Figure 8 shows some examples of image deformation along the principal axes. The image in the middle is the original image. Moving in either direction along an axis produces the images on its left and its right. We see that the main axes seem to contain information about standard face attributes. For example, the 4th component seems to be capturing information about facial expression while the 9th one seems to be capturing information about the age. In the absence of supervision, some directions make several attributes move simultaneously; for example, smiling seems correlated with the hair color. These correlations are artifacts of the CelebA dataset distribution. In the spirit of BID36, we showcase the effect of simple arithmetic operations in the noise space of the various models. More precisely, we average the noise vector of three images of men wearing sunglasses, remove the average noise vector of three images of men not wearing sunglasses, and add the average noise vector of three images of women not wearing sunglasses. The resulting image resembles a woman wearing sunglasses, as shown in Figure 9. Generative Adversarial Networks. GANs were introduced by BID15, and refined in multiple recent works BID11 BID36 BID47 BID37. As described in Section 1, GANs construct a generative model of a probability distribution P by setting up an adversarial game between a generator g and a discriminator d: min_g max_d E_{x∼P}[log d(x)] + E_{z∼Q}[log(1 − d(g(z)))]. In practice, most of the applications of GANs concern modeling distributions of natural images. In these cases, both the generator g and the discriminator d are parametrized as deep convnets BID24. Among the multiple architectural variations explored in the literature, the most prominent is the Deep Convolutional Generative Adversarial Network (DCGAN) BID36. Therefore, in this paper we will use the specification of the generator function of the DCGAN to construct the generator of GLO across all of our experiments. Autoencoders. In their simplest form, an Auto-Encoder (AE) is a pair of neural networks, formed by an encoder f: X → Z and a decoder g: Z → X. The role of an autoencoder is to compress the data {x_1, ..., x_N} into the representation {z_1, ..., z_N} using the encoder f(x_i), and decompress it using the decoder g(f(x_i)). Therefore, autoencoders minimize E_{x∼P} ℓ(g(f(x)), x), where ℓ: X × X → R⁺ is a simple loss function, such as the mean squared error. There is a vast literature on autoencoders, spanning three decades from their conception BID6 BID1, renaissance BID16, and recent probabilistic extensions BID43 BID21. Several works have combined GANs with AEs. For instance, BID47 replace the discriminator of a GAN by an AE, and BID42 replace the decoder of an AE by a generator of a GAN. Similar to GLO, these works suggest that the combination of standard pipelines can lead to good generative models.
In this work we attempt one step further, to explore whether learning a generator alone is possible. Inverting generators. Several works attempt to recover the latent representation of an image with respect to a generator. In particular, BID28; BID48 show that it is possible to recover z from a generated sample. Similarly, BID10 show that it is possible to learn the inverse transformation of a generator. These works are similar to BID45, where the gradients of a particular feature of a convnet are back-propagated to the pixel space in order to visualize what that feature stands for. From a theoretical perspective, BID7 explore the theoretical conditions for a network to be invertible. All of these inverting efforts are instances of the pre-image problem BID22. BID4 have recently shown that it is possible to recover images from a trained generator with compressed sensing. Similar to our work, they use an ℓ2 loss and backpropagate the gradient to the low-rank distribution. However, they do not train the generator simultaneously. Jointly learning the representation and training the generator allows us to extend their findings. BID38 also use generative models to compress images. Figure 8 (panels: 1st, 2nd, 4th and 9th eigenvectors): Illustration of the variation around principal components of the GLO latent space on the CelebA 128 × 128 dataset. The original image is in the middle and we move along an eigenvector in both directions. We illustrate this process with the first two components as well as some later ones. Several works have used an optimization of a latent representation for the express purpose of generating realistic images, e.g. BID34 BID32. In these works, the loss function optimized to generate images is trained separately from the optimization of the latent representation (in the former, the loss is based on a complex wavelet transform, and in the latter, on separately trained autoencoders and classification convolutional networks). In this work we train the latent representations and the generator together from scratch, and show that at test time we may sample new latent z either with simple parametric distributions or by interpolation in the latent space. Learning representations. Arguably, the problem of learning representations from data in an unsupervised manner is one of the long-standing problems in machine learning BID2 BID25. One of the earliest algorithms used to achieve this goal is Principal Component Analysis, or PCA BID33 BID19. For instance, PCA has been used to learn low-dimensional representations of human faces BID41, or to produce a hierarchy of features BID8. The nonlinear extension of PCA is an autoencoder BID1, which is in turn one of the most widely used algorithms to learn low-dimensional representations from data. Similar algorithms learn low-dimensional representations of data with certain structure. For instance, in sparse coding BID0 BID30, the representation of one image is the linear combination of very few elements from a dictionary of features. More recently, BID46 realized the capability of deep neural networks to map large collections of images to noise vectors, and exploited a similar procedure to learn visual features in an unsupervised manner. Similarly to us, BID3 allow the noise vectors z to move in order to better learn the mapping from images to noise vectors. The proposed GLO is analogous to these works, in the opposite direction: learn a map from noise vectors to images.
Finally, the idea of mapping between images and noise to learn generative models is a well-known technique BID9 BID23 BID39 BID5. Nuisance variables. One might consider the generator parameters the variables of interest, and Z to be "nuisance variables". There is a classical literature on dealing with nuisance parameters while estimating the parameters of interest, including optimization methods as we have used BID40. In this framing, it may be better to marginalize over the nuisance variables, but for the models and data we use this is intractable. Figure 9 (rows: PCA, VAE with Lap1, GLO with Lap1): Illustration of feature arithmetic on CelebA. We show that by taking the average hidden representation of row a, subtracting the one of row b and adding the one of row c, we obtain a coherent image. We show such interpolations with PCA, VAE and GLO. Speech generation. Optimizing a latent representation of a generative model has a long history in speech BID35, both for fitting single examples in the context of fitting a generative model, and in the context of speaker adaptation. The experimental results presented in this work suggest that, in the image domain, we can recover many of the properties of GAN models by using convnets trained with simple reconstruction losses. While this does not invalidate the promise of GANs as generic models of uncertainty or as methods for building generative models, our results suggest that, in order to more fully test the adversarial construction, research needs to move beyond images and convnets. On the other hand, practitioners who care only about generating images for a particular application, and find that the parameterized discriminator does not improve their results, can use reconstruction losses in their model searches, alleviating some of the instability of GAN training. While the visual quality of the results is promising, especially on the CelebA dataset, they are not yet at the level of the results obtained by GANs on the LSUN bedrooms. This suggests several research directions: one possibility, suggested by Figure 3, is that being able to cover the entire dataset is too onerous a task if all that is required is to generate a few nice samples. In that figure we see that GANs have trouble reconstructing randomly chosen images at the same level of fidelity as their generations. However, GANs can produce good images after a single pass through the data with SGD. In future work we hope to better understand the tension between these two observations. There are many possibilities for improving the quality of GLO samples beyond understanding the effects of coverage. For example, other loss functions (e.g. a VGG metric, as in BID32), model architectures (here we stayed close to DCGAN for ease of comparison), and more sophisticated sampling methods after training the model may all improve the visual quality of the samples. There is also much work to be done in adding structure to the Z space. Because the methods here keep track of the correspondence between samples and their representatives, and because the Z space is free, we hope to be able to organize the Z in interesting ways as we train.
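For completeness, here is a small sketch (an illustration under our own assumptions, not the authors' code) of the test-time sampling procedure described in the experiments: fit a Gaussian with full covariance to the optimized latents Z and decode draws from it with the trained generator. The re-projection step mirrors the unit-sphere constraint used during training.

```python
import numpy as np
import torch

def sample_from_glo(generator, z_matrix, num_samples=64):
    """z_matrix: (N, d) array of learned GLO latents; generator: trained decoder g."""
    mean = z_matrix.mean(axis=0)
    cov = np.cov(z_matrix, rowvar=False)              # full covariance, as in the text
    z_new = np.random.multivariate_normal(mean, cov, size=num_samples)
    # Latents were kept on the unit l2 sphere during training, so re-project samples.
    z_new /= np.maximum(np.linalg.norm(z_new, axis=1, keepdims=True), 1.0)
    with torch.no_grad():
        return generator(torch.as_tensor(z_new, dtype=torch.float32))
```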
ryj38zWRb
Are GANs successful because of adversarial training or the use of ConvNets? We show that a ConvNet generator trained with a simple reconstruction loss and learnable noise vectors leads to many of the desirable properties of a GAN.
In this paper, we propose a novel kind of kernel, the random-forest kernel, to enhance the empirical performance of MMD GAN. Different from common forests with deterministic routings, a probabilistic routing variant is used in our random-forest kernel, which makes it possible to merge with CNN frameworks. Our proposed random-forest kernel has the following advantages: From the perspective of random forests, the output of the GAN discriminator can be viewed as feature inputs to the forest, where each tree gets access to merely a fraction of the features, and thus the entire forest benefits from ensemble learning. From the perspective of kernel methods, the random-forest kernel is proved to be characteristic, and therefore suitable for the MMD structure. Besides, being an asymmetric kernel, our random-forest kernel is much more flexible in terms of capturing the differences between distributions. Sharing the advantages of CNNs, kernel methods, and ensemble learning, our random-forest kernel based MMD GAN obtains desirable empirical performance on the CIFAR-10, CelebA and LSUN bedroom data sets. Furthermore, for the sake of completeness, we also put forward a comprehensive theoretical analysis to support our experimental results. Generative adversarial nets (GANs;) are well-known generative models, whose success is largely attributed to the sophisticated design of a generator and a discriminator which are trained jointly in an adversarial fashion. Nowadays GANs are intensely used in a variety of practical tasks, such as image-to-image translation; 3D reconstruction; video prediction; text-to-image generation; just to name a few. However, it is well known that the training of GANs is a little tricky, see e.g. . One reason for the instability of GAN training lies in the distance used by the discriminator to measure the divergence between the generated distribution and the target distribution. For instance, concerning the Jensen-Shannon divergence based GANs proposed in , it has been pointed out that if the generated distribution and the target distribution are supported on manifolds where the measure of intersection is zero, the Jensen-Shannon divergence will be constant and the KL divergences will be infinite. Consequently, the generator fails to obtain enough useful gradient to update, which undermines GAN training. Moreover, two non-overlapping distributions may be judged to be quite different by the Jensen-Shannon divergence, even if they are nearby with high probability. As a result, to better measure the difference between two distributions, Integral Probability Metrics (IPM) based GANs have been proposed. For instance, one line of work utilizes the Wasserstein distance in the GAN discriminator, while another adopts maximum mean discrepancy (MMD), managing to project and discriminate data in a reproducing kernel Hilbert space (RKHS). To mention, the RKHS with characteristic kernels including the Gaussian RBF kernel and the rational quadratic kernel (Bińkowski et al., 2018) has strong power in the discrimination of two distributions, see e.g. . In this paper, inspired by the non-linear discriminating power of decision forests, we propose a new type of kernel named the random-forest kernel to improve the performance of the MMD GAN discriminator. In order to fit with the back-propagation training procedure, we borrow the decision forest model with stochastic and differentiable decision trees from prior work for our random-forest kernel. To be specific, each dimension of the GAN discriminator outputs is randomly connected to one internal node of a soft decision forest, serving as the candidate to-be-split dimension.
Then, the tree is split with a soft decision function through a probabilistic routing. Other than the typical decision forest used in classification tasks where the value of each leaf node is a label, the leaf value of our random forest is the probability of a sample x i falling into a certain leaf node of the forest. If the output of the discriminator is denoted as h θ N (x i) and the probability output of the t-th tree is denoted as µ t (h θ N (x i); θ F ), the random forest kernel k RF can be formulated as where T is the total number of trees in the forest, θ N and θ F denote the parameters of the GAN discriminator and the random forest respectively. Recall that random forest and deep neural networks are first combined in , where differentiable decision tree model and deep convolutional networks are trained together in an end-to-end manner to solve classification tasks. extends the idea to label distribution learning, and makes further extensions in regression regime. , and also introduce deep decision forests. Apart from the typical ensemble method that averages the across trees, they aggregate the by multiplication. As for the combination of random forest and introduce forests structure in GAN discriminator, combining CNN network and forest as a composited classifier, while uses forest structure as one of non-linear mapping functions in regularization part. On the other hand, in the aspect of relationship between random forest and kernel method, initiates the literature concerning the link. He shows the fact that a purely random tree partition is equivalent to a kernel acting on the true margin, of which form can be viewed as the probability of two samples falling into the same terminal node. proves that random forest kernel is characteristic. Some more theoretical analysis can be found in , ,. However, despite their theoretical breakthroughs, forest decision functions used in these forest kernels are non-differentiable hard margins rather than differentiable soft ones, and thus cannot be directly used in back propagation regime. To the best of our knowledge, MMD GAN with our proposed random-forest kernel is the first to combine random forest with deep neural network in the form of kernel MMD GAN. Through theoretical analysis and numerical experiments, we evaluate the effectiveness of MMD GAN with our random-forest kernel. From the theoretical point of view, our random-forest kernel enjoys the property of being characteristic, and the gradient estimators used in the training process of random-forest kernel GAN are unbiased. In numerical experiments, we evaluate our random-forest kernel under the setting of both the original MMD GAN and the one with repulsive loss . Besides, we also compare our random-forest kernel with Gaussian RBF kernel , rational quadratic kernel (Bińkowski et al., 2018), and bounded RBF kernel . As a , MMD GAN with our random-forest kernel outperforms its counterparts with respect to both accuracy and training stability. This paper is organized as follows. First of all, we introduce some preliminaries of MMD GAN in Section 2. Then we review the concept of deep random forest and show how it is embedded within a CNN in 3.1. After that, random-forest kernels and MMD GAN with random-forest kernels are proposed in 3.2 and 3.3 respectively. Besides, the training techniques of MMD GAN with random-forest kernel are demonstrated in Section 3.4 and the theoretical are shown in Section 3.5. 
Eventually, Section 4 presents the experimental setups and , including the comparison between our proposed random-forest kernel and other kernels. In addition, all detailed theoretical proofs are included in the Appendices. The generative model captures the data distribution P X, by building a mapping function G: Z → X from a prior noise distribution P Z to data space. While the discriminative model D: X → R is used to distinguish generated distribution P Y from real data distribution P X. Taking X, X ∼ P X and Y, Y ∼ P Y:= P G (Z) where Y:= G(Z) and Y:= G(Z), the squared MMD is expressed as The loss of generator and discriminator in MMD GAN proposed in is: proposed MMD GAN with repulsive loss, where the objective functions for G and D are: we can write an unbiased estimator of the squared MMD in terms of k as When k is a characteristic kernel, we have MMD 2 [P X, P Y] ≥ 0 with equality applies if and only if P X = P Y. The best-known characteristic kernels are gaussian RBF kernel and rational quadratic kernel (Bińkowski et al., 2018). In this section, we review a stochastic and differentiable variant of random forest and how it is embedded within a deep convolutional neural network proposed in. Then we propose random-forest kernel and we apply it in MMD GAN. We illustrate the advantages of our random-forest kernel, show the training technique of MMD GAN with random-forest kernel, and study its theoretical properties. Suppose that a random forest consists of T ∈ N random trees. For the t-th tree in the forest, t ∈ {1, . . ., T}, we denote N t:= {d t j} |Nt| j=1 as the set of its internal nodes and if T trees have the same structure, then we have |N t | = |N |, see Figure 1. Furthermore, we denote L t as the set of its leaf nodes and θ t F as the parameters of the t-th tree. Here we introduce the routing function µ t (x; θ t F) which indicates the probability of the sample x falling into the -th leaf node of the t-th tree. In order to provide an explicit form for the routing function µ t (x; θ t F), e.g. the thick black line in Figure 1, we introduce the following binary relations that depend on the tree structure: d t j is true, if belongs to the left subtree of node d ) be the decision function of the j-th internal node in the t-th tree, that is the probability of the sample x falling into the left child of node d t j in the t-th tree. Then, µ t can be expressed by these relations as where R t denotes the unique path from node 1 to node of the t-th tree. Figure 1: Example of a random forest with T trees: blue nodes denote the internal nodes N:= {d1, ..., d7} while purple nodes are the leaf nodes L:= {1, · · ·, 8}. The black thick path illustrates the route of a sample x falling into the 6 leaf node of the t-th tree. Now, let us derive the explicit form of the decision function p t j (x; θ t F). Here, to utilize the power of deep learning, we consider using the convolutional neural network to construct decision functions of random forest. To be specific, given the parameter θ N trained from a CNN network, we denote h(·; θ N) as the d -dimension output of a convolutional neural network, which is the unit of the last fullyconnected layer in the CNN, and h i (·; θ N) is the i-th element of the CNN output. We denote C: {1, . . ., T |N |} → {1, . . ., d} as the connection function, which represents the connection between the internal node d t j and the former CNN output h i. 
Note that during the whole training process, the form of the connection function C is not changed and every internal node d t j is randomly assigned to an element h C(T (t−1)+j) (·; θ). If we choose the sigmoid function σ(x) = (1+e −x) −1 as the decision function, and let the parameters of the t-th tree be θ For example, we have the probability p Every leaf node in each tree has a unique road R t from node 1 to node with length |R t | = log 2 (|L t |). Then, for the every leaf node of the t-th tree, we have where Lf denotes the set of all left son nodes of its father node. Here, we propose the random-forest kernel as follows: Definition 1 (Random-Forest Kernel) Let x, y be a pair of kernel input, let θ t F = (w t, b t) denotes the weights and bias of the t-th tree of the random forest, and θ F:= (θ t F) T t=1. The random-forest kernel can be defined as where L t denotes the set of leaf nodes in the t-th tree, We write and introduce the objective functions of MMD GAN with random-forest kernel by where y = G ψ (z), z is noise vector, and R is the regularizer of random-forest kernel (the detail is shown in Section 3.4). In addition, the objective functions of MMD GAN with repulsive loss are Random-forest kernel MMD GAN enjoys the following advantages: • Our proposed random-forest kernel used in MMD GAN benefits from ensemble learning. From the perspective of random forest, the output of MMD GAN discriminator h(·; θ N) can be viewed as feature inputs to the forest. To mention, each tree only gets access to merely a fraction of the features by random connection functions, and thus the entire forest benefits from ensemble learning. • Our random-forest kernel MMD GAN enjoys the advantages of three powerful discriminative methods, which are CNN, kernel method, and ensemble learning. To be specific, CNN is good at extracting useful features from images; Kernel method utilize RKHS for discrimination; Ensemble learning utilizes the power of randomness and ensemble. • Our proposed random-forest kernel has some good theoretical properties. In one aspect, random-forest kernel is proved to be characteristic in. In another, in Section 3.5, the unbiasedness of the gradients of MMD GAN with random-forest kernel is proved. , the authors mention that the tree may get stuck on plateaus if internal nodes always assign the most of probability to one of its subtree. The gradients will vanish because the gradients of the logistic-type decision function will be very closed to zero. In order to stabilize the training of random-forest kernel and avoid the stuck on bad solutions, we add penalty that encourage each internal node to split in a balanced style as does, that is, we penalize the cross entropy between the desired 0.5, 0.5 average probabilities of falling into two subtrees and the actual average probability α, 1 − α. The actual average probability of the i-th internal node α i is, where P i (x) is the routing probability of x from root node to internal node i, p i (x) is the probability of x falling into the left subtree of the i-th internal node, and Ω is the collection of mini-batch samples. Then, the formulation of the regularizer is: where λ is exponentially decayed with the depth of d of the internal node by multiplying the coefficient 2 −d, for the intuition that less balanced split in deeper internal node may increase the non-linear discrimination power. 
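As an illustration of the probabilistic routing and the kernel defined above, the following PyTorch sketch builds T soft trees of fixed depth on top of the discriminator output h. The fixed random connection of internal nodes to feature dimensions, the parameter initialization and the breadth-first node ordering are our own illustrative choices, not a verbatim reproduction of the authors' implementation.

```python
import torch

class RandomForestKernel(torch.nn.Module):
    """Minimal sketch: T soft decision trees of fixed depth acting on features h."""
    def __init__(self, feat_dim, num_trees=10, depth=3):
        super().__init__()
        self.T, self.depth = num_trees, depth
        n_internal = 2 ** depth - 1                    # internal nodes per tree
        # Fixed random connection function: each internal node reads one feature of h.
        self.register_buffer("conn", torch.randint(0, feat_dim, (num_trees, n_internal)))
        self.w = torch.nn.Parameter(torch.randn(num_trees, n_internal))
        self.b = torch.nn.Parameter(torch.zeros(num_trees, n_internal))

    def leaf_probs(self, h):
        """h: (batch, feat_dim) -> mu: (batch, T, 2**depth) leaf-arrival probabilities."""
        feats = h[:, self.conn]                        # (batch, T, n_internal)
        p = torch.sigmoid(self.w * feats + self.b)     # probability of routing left
        mu = torch.ones(h.size(0), self.T, 1, device=h.device)
        offset = 0
        for d in range(self.depth):                    # breadth-first over tree levels
            level = p[:, :, offset:offset + 2 ** d]
            mu = torch.stack([mu * level, mu * (1 - level)], dim=-1).flatten(2)
            offset += 2 ** d
        return mu

    def forward(self, hx, hy):
        """k_RF for paired inputs: (1/T) * sum_t <mu_t(h(x)), mu_t(h(y))>.

        For an MMD estimate one would evaluate this over all pairs of a batch.
        """
        return (self.leaf_probs(hx) * self.leaf_probs(hy)).sum(-1).mean(-1)
```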
When training random-forest kernel, a mini-batch of real samples X and generated pictures Y are both fed into the discriminator, and then k(X, X), k(X, Y) and k(Y, Y) are calculated, where k:= k RF • h(·; θ N). Here, to notify, we find that the Ω in the regularizer formulation does matter in forest-kernel setting. It's better to calculate α i and R(Ω) in the case of Ω = X, Ω = Y, Ω = X ∪ Y respectively, and then sum up three parts of regularizer as final regularizer R. Therefore, the formulation of regularizer R added in the training of random-forest kernel is In this subsection, we present our main theoretical . Theorem 2 (Unbiasedness) Let X be the true data on X with the distribution P X and Z be the noise on Z with the distribution P Z satisfying E P X X α < ∞ and E P Z Z α < ∞ for some α ≥ 1. Moreover, let G ψ: Z → X be a generator network, h θ N: X → R d be a discriminator network, k RF be the random-forest kernel, and θ D:= (θ N, θ F) be the parameters of the GAN discriminator. Then, for µ-almost all θ D ∈ R |θ D | and ψ ∈ R |ψ|, there holds In other words, during the training process of Random-Forest Kernel MMD GAN, the estimated gradients of MMD with respect to the parameters ψ and θ D are unbiased, that is, the expectation and the differential are exchangeable. In this section, we evaluate our proposed random-forest kernel in the setting of MMD GAN in and the MMD GAN with repulsive loss . To illustrate the efficacy of our random-forest kernel, we compare our random-forest kernel with Gaussian kernel , rational quadratic kernel (Bińkowski et al., 2018) As is shown in Figure 3, the shapes of the Gaussian RBF kernel and the rational quadratic kernel are both symmetric. However, the local structure of random-forest kernel (w.r.t reference points except 70-dimensional zero vector) is asymmetric and very complex. The asymmetry and complexity of random-forest kernel may be helpful to discriminate two distributions in MMD GAN training. For dataset Cifar10 and dataset LSUN bedroom, DCGAN architecture with hyper parameters from is used for both generator and discriminator; and for dataset CelebA, we use a 5-layer DCGAN discriminator and a 10-layer ResNet generator. Further details of the network architecture are given in Appendix A.3. We mention that in all experiments, batch normalization is used in the generator and spectral normalization is used in the discriminator. The hyper-parameter details of kernels used in the discriminator are shown in Appendix A.1. For the sake of comparison with forest kernel, the dimension of discriminator output layer s is set to be 70 for random-forest kernel and to be 16 for other kernels following the previous setting of Bińkowski et al.;. We set the initial learning rate 10 −4 and decrease the learning rate by coefficient 0.8 in iteration 30000, 60000, 90000, and 120000. Adam optimizer is used with momentum parameters β 1 = 0.5 and β 2 = 0.999. The batch size of each model is 64. All models were trained for 150000 iterations on CIFAR-10, CelebA, and LSUN bedroom datasets, with five discriminator updates per generator update. The following three metrics are used for quantitative evaluation: Inception score (IS) , Fréchet inception distance (FID) , and Kernel inception distance (KID) (Bińkowski et al., 2018). In general, higher IS and Lower FID, KID means better quality. However, outside the dataset Imagenet, the metric IS has some problem, especially for datasets celebA and LSUN bedroom. 
Therefore, for inception score, we only report the inception score of CIFAR-10. Quantitative scores are calculated based on 50000 generator samples and 50000 real samples. We compare our proposed random-forest kernel with mix-rbf kernel and mix-rq kernel in the setting of the MMD GAN loss, and compare our proposed random-forest kernel with rbf-b kernel in the setting with MMD GAN repulsive loss. The Inception Score, the Fréchet Inception Distance and the Kernel Inception Distance of applying different kernels and different loss functions on three benchmark datasets are shown in table 1. We find that, in the perspective of the original MMD GAN loss, our newly proposed random-forest kernel shows better performance than the mix-rbf kernel and the mix-rq kernel in CIFAR-10 dataset and LSUN bedroom dataset; and in the perspective of the repulsive loss, the performance of our newly proposed random-forest kernel is comparable or better than the rbf-b kernel. The efficacy of our newly proposed random-forest kernel is shown under the setting of both MMD GAN loss and MMD GAN repulsive loss. Some randomly generated pictures of model learned with various kernels and two different loss functions are visualized in Appendix D. The formulation of our proposed random-forest kernel is In experiments, we take the number of trees in the forest T = 10 and the depth of trees dep = 3. Thus, the total number of internal nodes is 70. To notify, in general, the parameters are trainable, where and N is the set of every internal nodes of a tree. However, for the sake of experimental simplification, we fix each w where Σ = {2, 5, 10, 20, 40, 80}, and rational quadratic kernel (Bińkowski et al., 2018) with mixture of kernel scale α, that is, where A = {0.2, 0.5, 1, 2, 5}. In the setting of the MMD GAN with repulsive loss proposed in , we compare our forest kernel with bounded RBF kernel , that is, with σ = 1, b l = 0.25 and b u = 4. In figure 3, we compare the contours of three different kernels (the detail of kernels is shown in A.1). We directly plot the filled contours of 2-dimensional Gaussian kernel and rational quadratic kernel with reference to; As for random-forest kernel with T = 10 and dep = 3, where the input dimension is 70, the steps of a 2-dimensional visualization are as follows: 1) We randomly generate 25000 points from the uniform distribution U[−1, 1] 70 and set (0.5, . . ., 0.5) ∈ R 70 as the reference point. To notify, if the reference point is 70-dimensional zero vector, the values of random-forest kernel will be constant; 2) We calculate the 25000 output values of random-forest kernel; 3) We transform 70-dimensional 25000 randomly generated points and reference point together by t-SNE to 2-dimensional points; 4) We show the filled contour of the neighborhood of transformed reference point. We try to visualize the local structure of random-forest kernel. In the experiments of the CIFAR-10 and LSUN bedroom datasets, we use the DCGAN architecture following , and for the experiments of the CelebA dataset, we use a 5-layer DCGAN discriminator and a 10-layer ResNet generator as in Bińkowski et al.. The first few layers of the ResNet generator consist of a linear layer and 4 residual blocks as in. The network architecture details are shown in Table 2 and 3. Table 2: Network architecture used in the image generation on CIFAR-10 dataset and LSUN bedroom dataset. In terms of the shape parameter h and w in the table, we take h = w = 4 for CIFAR-10 and h = w = 8 for LSUN bedroom. 
As for the output dimension of discriminator s, we take s = 70 for random-forest kernel and s = 16, 70 for other kernels. To notify, 16 is the setting in Bińkowski et al. In Section B, we will show the main propositions used to prove the Theorem 2. To be specific, in Section B.1, we represent neural networks as computation graphs. In Section B.2, we consider a general class of piecewise analytic functions as the non-linear part of neural networks. In Section B.3, we prove the Lipschitz property of the whole discriminators. In Section B.4, we discover that for P X almost surely, the network is not differential for its parameters θ N and ψ. Fortunately, we prove that the measure of bad parameters set is zero. In Section C, we will show the explicit proof of main propositions in Section B and Theorem 2. Historical attempts to scale up GANs using CNNs to model images have been unsuccessful. The original CNN architecture is made up of convolution, non-linear and pooling. Now for our model, we adopt the deconvolution net to generate the new data with spatial upsampling. Moreover, batch normalization is a regular method which stabilizes learning by normalizing the input to each unit to have zero mean and unit variance. Furthermore, relu functions are used both in generator and discriminator networks as non-linear part. Here we avoid spatial pooling such as max-pooling and global average pooling. Throughout this paper, we always denote by the output of a fully connected layer, where d represents the number of neurons in the output. The general feed-forward networks including CNN and FC can be formulated as a directed acyclic computation graph G consisting of L + 1 layers, with a root node i = 0 and a leaf node i = L. For a fixed node i, we use the following notations: π(i): the set of parent nodes of i; j < i: j is a parent node of i. Each node i > 0 corresponds to a function f i that receives a R d π(i) -valued input vector, which is the concatenation of the outputs of each layer in π(i), and outputs a R di -valued vector, where According to the construction of the graph G, the feed-forward network that factorizes with functions f i recursively can therefore be defined by h 0 = X, and for all 0 < i ≤ L, where h π(i) is the concatenation of the vectors h j, j ∈ π(i). Here, the functions f i can be of the following different types: ) is a linear operator on the weights W i, e.g. convolutions, then the functions f i are of the linear form (ii) Non-linear: Such functions f i including ReLU, max-pooling and ELU, have no learnable weights, can potentially be non-differentiable, and only satisfy some weak regularization conditions, see Definition 3 and the related Examples 2, 3, and 4. In the following, we denote by I the set of nodes i such that f i is non-linear, and its complement by I c:= {1, . . ., L} \ I, that is, the set of all of all linear modules. We write θ N as the concatenation of parameters where |θ N | denotes the total number of parameters of θ N. Moreover, the feature vector of the network corresponds to the output neurons G L of the last layer L and will be denoted by where the subscript θ stands for the parameters of the network. If X is random, we will use h θ N (X) to denote explicit dependence on X, and otherwise when X is fixed, it will be omitted. 
Throughout this paper, in the construction of neural networks, we only consider activation functions that are piecewise analytic which can be defined as follows: Definition 3 (Piecewise Analytic Functions) Let {f i} i∈I be non-linear layers in neural networks, then f i is said to be an piecewise analytic function if there exists a partition of R d π(i) with J i pieces, that is, The sets D The following examples show that in practice, the vast majority of deep networks satisfy the conditions in the above definition, and therefore are piecewise analytic. Example 1 (Sigmoid) Let f i outputs the sigmoid activation function on the inputs. Then we need no partition, and hence there exist corresponding to the whole space; (ii) J i = 1 real analytic function f Here, we mention that the case in Example 1 corresponds to most of the differentiable activation functions used in deep learning, more examples include the softmax, hyperbolic tangent, and batch normalization functions. On the other hand, in the case that the functions are not differentiable, as is shown in the following examples, many commonly used activation functions including the ReLU activation function are at least piecewise analytic. Besides the ReLU activation function, other activation functions, such as the Max-Pooling and the ELU also satisfy the form of Definition 3 are therefore piecewise analytic. In this section, we investigate the Lipschitz property of the proposed discriminative networks, which is formulated as follows: Proposition 4 Let θ D be the parameters of discriminators and B r (θ D) ⊂ R |θ D | be the ball with center θ D and radius r ∈ (0, ∞). Then, for all θ D ∈ B r (θ D) and all x ∈ R d, there exists a regular function c(x) with In the analysis of section B.3, we intentionally ignore the situation when samples fall into the "bad sets", where the network is not differentiable for data sets x with nonzero measure. And in proposition 5, we show that the measure of the so called "bad sets" is zero. To better illustrate this proposition, we first denote some notations as follows: as the set of input vectors x ∈ R d such that h θ N (x) is not differentiable with respect to θ N at the point θ N,0. Then we call the set of critical parameters, where the network is not differentiable for data sets x with nonzero measure. Proposition 5 Let the set Θ P X be as in equation 8. Then, for any distribution P X, we have µ(Θ P X) = 0. To prove Proposition 4, we first introduce Lemma 6 and Lemma 7. Lemma 6 describes the growth and Lipschitz properties of general networks including convolutional neural networks and fully connected neural networks introduced in Section B.1. Lemma 6 Let h θ N be the neural networks defined as in Section B.1. Then there exist continuous functions a, b: R |θ N | → R and α, β: Proof [of Lemma 6] We proceed the proof by induction on the nodes of the network. Obviously, for i = 0, the inequalities hold with b 0 = 0, a 0 = 1, β 0 = 0 and α 0 = 0. Then, for the induction step, let us fix an index i. Assume that for all x ∈ R d and all where are some continuous functions. Moreover, concerning with the Lipschitz property, we have where we used the notations (ii) If i ∈ I c, that is, i is not a linear layer, here we only consider the sigmoid and ReLU functions. We first show that both of them are Lipschitz continuous. Concerning the sigmoid function, σ(x) = 1 1 + e −x we obviously have for all x ∈ R. Consequently, the sigmoid function is Lipschitz continuous with Lipschitz constant |σ| 1:= 1/4. 
Next, for the ReLU function, we have for all x ∈ R, Therefore, the ReLU function is Lipschitz continuous with Lipschitz constant |σ| 1:= 1. Thus, non-linear layer f i is Lipschitz continuous with Lipschitz constant M:= |f i | 1. Consequently, by recursion, we obtain continuous functions Next, we investigate the growth conditions and Lipschitz property of the random forest. For the ease of notations, we write the function µ (T) as Moreover, it is easily seen that T · |N | equals the number of internal nodes in the random forest. Lemma 7 Let h(x; θ N) be the input vector of random trees and θ F:= (w, b) where w and b denote the weights and bias of the random forests, respectively. Then, for all h ∈ R d and θ F, θ F ∈ R 2T |N |, there exist continuous functions c 1, c 2, and constants c 3, c 4, c 5 such that µ ≤ 1 Taking the summation on both sides of the above inequality with respect to all of the nodes in the random forest, that is, w.r.t. ∈ L, we obtain where the second inequality follows from the Cauchy-Schwartz inequality and the third inequality is due to the fact that the number that the nodes p i in the random forest assigned to the corresponding node h j equals T |N |/d or T |N |/d + 1, which are less than |L|/d. Now, we show the Lipschitz properties equation 12 and equation 13 of the random forest. From the equivalent form equation 3 concerning the value of the leaf node, we easily see that µ can be written as a product of probability functions p i or 1 − p i. Therefore, without loss of generality, we can assume µ are of the product form For a fixed t = 1,..., T, recall that T t denotes the connection function of the t-th random tree. Then, the Lipschitz property of the sigmoid function and the continuously differentiability of the linear transform yield Then, equation 14 together with equation 15 implies where the last inequality follows from the Cauchy-Schwartz inequality. Consequently, concerning the random forest, we obtain where the second inequality is again due to then Cauchy-Schwartz inequality. Analogously, for a fixed t = 1,..., T and any i, we have and consequently we obtain and for the random forest, there holds which completes the proof. The next proposition presents the growth condition and the Lipschitz property of the composition of the neural network and the random forest. Lemma 8 Let B r (θ D) ⊂ R |θ D | be the ball with center θ D and radius r ∈ (0, ∞). Then, for all θ D ∈ B r (θ D), all x ∈ R d, there exist continuous functions c 6, c 7, c 8, and c 9 such that Proof [of Lemma 8] Combining equation 9 in Lemma 6 with equation 11 in Lemma 7, we obtain the growth condition of the form Concerning the Lipschitz property, using equation 12 and equation 13, we get where the last inequality follows from equation 9 and equation 10 established in Proposition 6. With the concatenation θ D:= (θ N, θ F):= (θ N, w, b) we obtain and thus the assertion is proved. Proof [of Proposition 4] First we give the growth conditions for the linear kernel. Let k be the linear kernel k L (x, y):= x, y, then we have If we denote u:= (x, y), then the above linear kernel k L (x, y) can be written a as a univariate function Therefore we have K is continuously differentiable satisfying the following growth conditions: Let us define the function f: Then we have f = K(v) and f = K(u). 
Moreover, f is differentiable with derivative The growth condition equation 18 implies Using the mean value theorem, we obtain that for all u, v ∈ R L, there holds Proposition 8 tells us that holds for some continuous functions c 6, c 7, c 8, and c 9, which are also bounded by certain constant B > 0 on the ball B r (θ D). Some elementary algebra shows that Since t → (1+t 1/2) 2 is concave on t ≥ 0, Jensen's inequality together with the moment assumption E x 2 < ∞ implies and also E(1 + x) < ∞ by the moment assumption. Therefore, the regular function c(x) defined by c(is integrable and thus our assertion is proved. To reach the proof of Proposition 5, we first introduce Lemma 9 and Lemma 11. Let i be a fixed node. To describe paths through the network's computational graph, we need to introduce the following notations: A(i):= {i | i is an ancestor of i}; Obviously, we always have ∂i ⊆ ¬i and ¬i = ∂i ∪ ¬π(i). We define a backward trajectory starting from node i by The set of all backward trajectories for node i will be denoted by Q(i). Lemma 9 Let i be a fixed node in the network graph. If θ N ∈ R |θ N | \ ∂S ¬i, then there exist a constant η > 0 and a trajectory q ∈ Q(i) such that for all θ N ∈ B η (θ N), there holds where f q is the real analytic function on R |θ N | with the same structure as h θ N, only replacing each nonlinear f i with the analytic function f Proof [of Lemma 9] We proceed by induction on the nodes of the network. If i = 0, we obviously have h 0 θ N = x, which is real analytic on R |θ N |. For the induction step we assume that the assertion holds for ¬π(i) and let θ N ∈ R |θ N | \ ∂S ¬i. Then there exist a constant η > 0 and a trajectory q ∈ Q(π(i)) such that for all θ N ∈ B η (θ N), there holds with f q: R |θ N | → R being a real analytic function. (i) If θ N ∈ S ∂i, then there exists a sufficiently small constant η > 0 such that B η (θ N) ∩ S ∂i = ∅. Therefore, there exists a j ∈ {1, . . ., J i} such that for all θ N ∈ B η (θ N), there holds where f j i is one of the real analytic functions in the definition of f i. Then, equation 19 implies that for all θ N ∈ B min{η,η} (θ N), there holds (ii) Consider the case θ N ∈ S ∂i. By assumption, we have θ N ∈ ∂S ∂i. Then there exists a small enough constant η > 0 such that B η (θ N) ⊂ S ∂i. If we denote Now we show by contradiction that for η small enough, there holds To this end, we assume that there exists a sequence of parameter and index pairs (θ N,n, p n) such that p n ∈ A c, θ N,n ∈ S pn, and θ N,n → θ N as n → ∞. Since A c is finite, there exists a constant subsequence {p ni} ⊂ {p n} and some constant p 0 ∈ A c with p ni = p 0 for all i. Then the continuity of the network and g p0 imply that S p0 is a closed set and consequently we obtain θ N ∈ S p0 by taking the limit, which contradicts the fact that θ N ∈ p∈A c S p. Therefore, for η small enough, there holds B η (θ N) ⊆ p∈A S p, which contradicts the assumption θ N ∈ ∂S ∂i. Consequently, there exists a j ∈ {1, . . ., J i} satisfying equation 20. By setting, where ⊕ denotes concatenation, then for for all θ N ∈ B min{η,η} (θ N), there holds and the assertion is proved. Now, for a fixed p = (i, j, s) ∈ P, we denote the set of network parameters θ N that lie on the boundary of p by where the functions g j i,s are as in Definition 3. As usual, the boundary of the set S p is denoted by ∂S p and the set of the boundaries is denoted by Finally, for the ease of notations, if P ⊂ P, we write Obviously, we have θ N ∈ R m \ ∂S ¬π(i). 
To prove Lemma 11, we need the following lemma which follows directly from and hence we omit the proof. Lemma 10 Let θ N → F (θ N): R |θ N | → R be a real analytic function and define Then we have either µ(M) = 0 or F = 0. Lemma 11 Let the set of boundaries ∂S P be as in equation 21. Then we have Proof [of Lemma 11] We proceed the proof by induction. For i = 0, we obviously have ∂S ¬0 = ∅ and therefore µ(∂S ¬0) = 0. For the induction step let us assume that For s = (p, q), the pair of an index p ∈ ∂i and a trajectory q ∈ Q(i), we define where the analytic function f q is defined as in Proposition 9. Then we prove by contradiction that for any θ N ∈ ∂S ∂i \ ∂S ¬π(i), there exists an s ∈ ∂i × Q(i) such that θ N ∈ M s and µ(M s) = 0. To this end, let θ N ∈ ∂S ∂i \ ∂S ¬π(i), then for small enough η > 0, there holds. By Proposition 9, there exists a trajectory q ∈ Q(π(i)) such that for all θ N ∈ B η (θ N), there holds Moreover, since θ N ∈ ∂S ∂i, there exists an index p ∈ ∂i such that g p (h This means that for s = (p, q), we have θ N ∈ M s. Therefore, we have where Combing equation 24 and equation 25, we obtain By the assumption µ(∂S ¬π(i) ) = 0, we conclude that µ(∂S ¬i) = 0. Since for the last node L, we have ¬L = P and therefore µ(∂S P) = 0. Note that the random forest µ (T) (·) can be considered as the composition of affine transformations and sigmoid functions and hence are always continuously differentiable with respect to θ F, we only need to investigate whether the neural network h θ N (x) is differentiable with respect to θ N. For a fixed x ∈ R d, we write as the set of parameters for which the network is not differentiable. Lemma 12 Let the set Θ x be as in equation 26. Then, for any x ∈ R d, we have Proof [of Lemma 12] Let the boundary set ∂S P be defined as in equation 22 and θ N,0 ∈ Θ x. Obviously, we have θ 0 ∈ S P. We proceed the proof of the inclusion Θ x ⊆ ∂S P by contradiction. To this end, we assume that θ D,0 ∈ ∂S P. When Lemma 9 applied to the output layer, there exist some η > 0 and a real analytic function f (θ N) such that for all θ N ∈ B η (θ N,0), there holds Consequently, the network is differentiable at θ N,0, contradicting the fact that θ N,0 ∈ Θ x. Therefore, we have Θ x ⊆ ∂S P and hence µ(Θ x) = 0 since µ(∂S P) = 0 according to Lemma 11. Proof [of Proposition 5] Let the sets N (θ N), Θ P X, and Θ x be as in equation 7, equation 8, and equation 26, respectively. Consider the sets S 1 = {(θ N, x) ∈ R |θ N | × R d | θ N ∈ Θ P X and x ∈ N (θ N)}, Since the network is continuous and not differentiable, Theorem I in implies that the sets S 1 and S 2 are measurable. Obviously, we have S 1 ⊂ S 2 and therefore we obtain ν(S 1) ≤ ν(S 2), where ν:= P X ⊗ µ. On the one hand, Fubini's theorem implies By Lemma 12, we have µ(Θ x) = 0 and therefore ν(S 2) = 0 and hence ν(S 1) = 0. On the other hand, Fubini's theorem again yields By the definition of Θ P X we have P X (N (θ N)) > 0 for all θ N ∈ Θ P X. Therefore, ν(S 1) = 0 implies that µ(Θ P X) = 0. C.3 PROOFS OF SECTION 3.5 To prove Theorem 2, we also need the following lemma. Lemma 13 Let (θ n) n∈N be a sequence in R m converging towards θ 0, i.e., θ n = θ 0 as n → ∞. Moreover, let f: R m → R be a function and g be a vector in R m. If holds for all sequences (θ n) n∈N, then f is differentiable at θ 0 with differential g. Proof [of Lemma 13] The definition of a differential tell us that g is the differential of f at θ 0, if By the sequential characterization of limits, we immediately obtain the assertion. 
Proof [of Theorem 2] Consider the following augmented networks: (θ D,ψ) (Z, Z) = (µ Without loss of generality, in the following, we only consider the network h (θ D,ψ) (X, Z) with inputs from P X ⊗ P Z, which satisfies E P X ⊗P Z (X, Z) 2 < ∞. By the expression of definition 1, we have where we denote µ (T) θ D (x):= µ (T) (h θ N (x); θ F ). Due to the linear kernel k L (x, y):= x, y, there holds )) is differentiable at λ 0 for P X -almost all x and P Z -almost all z. Then, according to Proposition 5, this statement holds for µ-almost all θ D ∈ R |θ D | and all ψ ∈ R |ψ|. Consider a sequence (λ n) n∈N that converges to λ 0, then there exists a δ > 0 such that λ n − λ 0 < δ for all n ∈ N. For a fixed u = (x, z) ∈ R d × R |z|, Proposition 4 states that there exists a regular function c(u) with E P X c(X) < ∞ such that |K(h λn (u)) − K(h λ0 (u))| ≤ c(u) λ n − λ 0 and consequently we have |∂ λ K(h λ0 (u))| ≤ c(u) for P X ⊗ P Z -almost all u ∈ R |u|. For n ∈ N, we define the sequence g n (x) by g n (u) = |K(h λn (u)) − K(h λ0 (u)) − ∂ λ K(h λ0 (u))(λ n − λ 0)| λ n − λ 0. Obviously, the sequence g n (x) converges pointwise to 0 and is bounded by the integrable function 2c(u). By the dominated convergence theorem, see e.g., Theorem 2.24 in , we have Moreover, for n ∈ N, we define the sequenceg n (x) bỹ Clearly, the sequenceg n (x) is upper bounded by E P X ⊗P Z g n (u) and therefore converges to 0. By Lemma 13, E P X ⊗P Z [K(h λ (u))] is differentiable at λ 0 with differential Since similar as above hold also for the networks h and h, and Lemma 6 in states that MMD 2 u is unbiased, our assertion follows then from the linearity of the form of MMD D SAMPLES OF GENERATED PICTURES Generated samples on the datasets CIFAR-10, CelebA, and LSUN bedroom are shown in Figure 4, 5, and 6, respectively.
HJxhWa4KDr
Equip MMD GANs with a new random-forest kernel.
Reinforcement learning in an actor-critic setting relies on accurate value estimates of the critic. However, the combination of function approximation, temporal difference (TD) learning and off-policy training can lead to an overestimating value function. A solution is to use Clipped Double Q-learning (CDQ), which is used in the TD3 algorithm and computes the minimum of two critics in the TD target. We show that CDQ induces an underestimation bias and propose a new algorithm that accounts for this by using a weighted average of the target from CDQ and the target coming from a single critic. The weighting parameter is adjusted during training such that the value estimates match the actual discounted return on the most recent episodes, and by that it balances over- and underestimation. Empirically, we obtain more accurate value estimates and demonstrate state-of-the-art results on several OpenAI gym tasks. In recent years it was shown that reinforcement learning algorithms are capable of solving very complex tasks, surpassing human expert performance in games like Go, Starcraft (DeepMind) or Dota (OpenAI). However, usually a large amount of training time is needed to achieve these results (e.g. 45,000 years of gameplay for Dota). For many important problems (e.g. in robotics) it is prohibitively expensive for the reinforcement learning agent to interact with its environment that much. This makes it difficult to apply such algorithms in the real world. Off-policy reinforcement learning holds the promise of being more data-efficient than on-policy methods, as old experience can be reused several times for training. Unfortunately, the combination of temporal-difference (TD) learning, function approximation and off-policy training can be unstable, which is why it has been called the deadly triad. If the action space is discrete, solutions like Double DQN are very effective at preventing divergence of the value estimates by eliminating an otherwise prevailing overestimation bias. For continuous action spaces, which characterize many tasks, it was shown that Double DQN cannot solve the overestimation problem. In an actor-critic setting it is important that the value estimates of the critic are accurate in order for the actor to learn a policy from the critic. The TD3 algorithm uses Clipped Double Q-learning (CDQ) to produce a critic without an overestimation bias, which greatly improved the performance of the algorithm. In CDQ two critics are trained at the same time and the TD target for both of them is the minimum over the two single TD targets. While the authors note that the CDQ critic update tends to underestimate the true values, this is not further examined. We show that this underestimation bias occurs in practice and propose a method that accounts for over- and underestimation of the critic at the same time. Similarly to CDQ we train two function approximators for the Q-values, but we do not regress them on the same quantity. The TD target for each of the two critics is a weighted average of the single TD target for that critic and the TD target from CDQ. The weighting parameter is learned by comparing the value estimates for the most recent state-action pairs with the observed discounted returns for these pairs. As one term of the average has an underestimation bias while the other one has an overestimation bias, the weighted average balances these biases, and we show empirically that this method obtains much more accurate estimates of the Q-values. We verify that the more accurate critics improve the performance of the reinforcement learning agent, as our method achieves state-of-the-art results on a range of continuous control tasks from OpenAI gym. To guarantee reproducibility we open-source our code, which is easy to execute, and evaluate our algorithm on a large number of different random seeds.
We verify that the more accurate critics improve the performance of the reinforcement learning agent as our method achieves state of the art on a range of continuous control tasks from OpenAi gym. To guarantee reproducibility we open source our code which is easy to execute and evaluate our algorithm on a large number of different random seeds. The Deterministic Policy Gradient algorithm (DPG) learns a deterministic policy in an actor-critic setting. This work was extended to the Deep Deterministic Policy Gradient algorithm by using multi-layer neural networks as function approximators. The Twin Delayed Deep Deterministic policy gradient algorithm (TD3) adds three more components to DDPG and achieves state of the art . First, the actor is updated less frequently than the critic, to allow for more accurate critic estimates before they are used for the actor. Second, in the critic update noise is added to the actions proposed by the actor. While these two extensions are introduced to decrease the variance in the policy gradient, the third one, Clipped Double Q-learning, aims at preventing an overestimation bias. The use of two Q-estimators was first proposed in the Double Q-learning algorithm. The two estimates are combined in the TD target such that determining the maximizing action is decoupled from computing the value for that action. Later it was proposed to use the target network (whose parameters are periodically set to the current parameters or are an exponentially weighted moving average of them) for one of the two value estimates. This eliminates the need to train two networks. While this works well for discrete actions, versions of double Qlearning adapted to the actor-critic setting were shown to still suffer from an overestimation bias. Other approaches that aim at preventing overestimation bias in Q-learning have averaged the Q-estimates obtained from snapshots of the parameters from different training steps or used a bias correction term. Balancing between over-and underestimating terms in the Q-estimates has been done for a discrete action space. The work investigates multi-armed bandit problems and an underestimation bias is reported for Double Q-learning, while Q-learning with a single estimator is reported to overestimate the true values. Similarly to our approach a weighting is introduced in the TD target. Different to us, the weighting parameter is not learned by taking actual samples for the value estimator into account, but the parameter is set individually for each state-action pair used to train the Q-network according to a function that computes the minimum and maximum Q-value over all actions for the given state. Finding these optimal actions for every transition on which the Q-networks are trained becomes infeasible for continuous action spaces. Divergence of Q-values has been investigated in several recent works van. Of them only in the case of a continuous action space is considered. In their analysis it is investigated under which conditions a certain approximation of the Q-value updates is a contraction in the sup norm. From that an algorithm is derived that does not need multiple critics or target networks. The downside is that it is very compute intensive. We consider model-free reinforcement learning for episodic tasks with continuous action spaces. An agent interacts with its environment by selecting an action a t ∈ A in state s t ∈ S for every discrete time step t. The agent receives a scalar reward r t and observes the new state s t+1. 
The goal is to learn a policy π : S → A that selects the agent's actions so as to maximize the sum of future discounted rewards R_t = Σ_{i=t}^{T} γ^{i−t} r_i. For a given state-action pair (s, a) the value function is defined as Q^π(s, a) := E_{s_i∼p_π, a_i∼π}[R_t | s, a], which is the expected return when executing action a in state s and following π afterwards. We write π_φ for the policy with parameters φ, which we learn in order to maximize the expected return J(φ) = E_{s_i∼p_π, a_i∼π}[R_0]. The parameters can be optimized with the gradient of J w.r.t. the policy parameters φ. The deterministic policy gradient is given by

∇_φ J(φ) = E_{s∼p_π}[ ∇_a Q^π(s, a)|_{a=π_φ(s)} ∇_φ π_φ(s) ].    (1)

In practice the value function Q^π is not given and has to be approximated. This setting is called actor-critic: the policy is the actor and the learned value function plays the role of the critic. The Q-learning algorithm tries to learn the value function with TD learning, an incremental update rule that aims at satisfying the Bellman equation. A neural-network variant of this algorithm can be used to learn the parameters θ of a network Q_θ : S × A → R that approximates the value function. The network is updated by regressing its value at (s_t, a_t) to the 1-step TD target

y = r_t + γ Q_θ̄(s_{t+1}, π_φ̄(s_{t+1})),    (2)

where Q_θ̄ and π_φ̄ are the target networks of Q_θ and π_φ, and the corresponding parameters θ̄ and φ̄ are updated according to an exponential moving average, θ̄ ← τ θ + (1 − τ) θ̄, and similarly for φ̄. If a learned critic Q_θ is used in the deterministic policy gradient (eq. 1), the actor π_φ is updated through Q_θ, which in turn is learned from the rewards obtained from the environment. This means that the actor requires a good critic in order to learn a well-performing policy. Recently, it was shown that using Q-learning in such an actor-critic setting can lead to an overestimation of the Q-values. This is problematic if the overestimation in the critic occurs for actions that lead to low returns. To avoid this, Clipped Double Q-learning was proposed. In this approach two Q-networks (Q_θ1, Q_θ2) are learned in parallel. They are trained on the same TD target, which is defined via the minimum of the two Q-networks:

y = r_t + γ min_{k=1,2} Q_θ̄k(s_{t+1}, π_φ̄(s_{t+1})).    (3)

The authors note that this can lead to an underestimation bias for the Q-values, but argue that this is not as bad as overestimating. A big advantage of CDQ is that the Q-estimates do not explode, which otherwise can sometimes happen and is usually followed by a breakdown in performance. Apart from that, an over- or underestimation bias would not be problematic if all values were biased by the same constant. It becomes a problem if the bias of the value estimates differs between state-action pairs, because then the critic might reinforce the wrong actions. If an action in a given state is erroneously given a high value by the critic, the actor is reinforced to choose that action. This increases the probability that the agent selects the action the next time it is in that (or a similar) state. The agent will receive a low reward, which leads to a decrease in performance, but the critic can correct itself on the new experience, and the correction will eventually propagate through to the actor. If, on the other hand, the critic underestimates the Q-value of a good action, the actor is trained to never try this action. In this case the critic might never be corrected, as experience contradicting the critic's belief is never encountered. While this is a simplistic picture of the ongoing learning dynamics, it gives a good intuition for why both cases should be prevented if possible.
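To make the two critic targets concrete, the following is a minimal NumPy sketch of the 1-step TD target of eq. 2 and the Clipped Double Q-learning target of eq. 3 for a batch of transitions. The callables q_target, q1_target, q2_target and actor_target stand in for the target networks, and the episode-termination mask (1 − done) is a standard implementation detail not spelled out in the text; both are assumptions of this sketch rather than part of the original description.

```python
import numpy as np

def td_target(r, s_next, done, q_target, actor_target, gamma=0.99):
    """DDPG-style 1-step TD target of eq. 2: y = r + gamma * Q_target(s', pi_target(s'))."""
    a_next = actor_target(s_next)
    return r + gamma * (1.0 - done) * q_target(s_next, a_next)   # no bootstrap at episode end

def cdq_target(r, s_next, done, q1_target, q2_target, actor_target, gamma=0.99):
    """Clipped Double Q-learning target of eq. 3: bootstrap with the minimum of both critics."""
    a_next = actor_target(s_next)
    q_min = np.minimum(q1_target(s_next, a_next), q2_target(s_next, a_next))
    return r + gamma * (1.0 - done) * q_min
```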
It is obvious that taking the minimum over two estimates can lead to an underestimation bias. To check whether this also occurs in practice, we conducted an experiment in which we examined the Q-value estimates of different agents. We trained one TD3 agent that uses CDQ as defined in eq. 3 and one TD3 agent that instead uses the DDPG-style critic update as defined in eq. 2. We trained on three different environments from OpenAI Gym. Periodically, we sampled 1000 state-action pairs from the replay buffer and computed the value estimate of the critic. We approximated the true values for each state-action pair by rolling out the current policy 50 times from that pair onwards and averaging the observed discounted return. The results for the average value at each time step are shown in the first row of Figure 1. Similarly to previous work, we observe that the DDPG-style updates of the Q-network lead to an overestimation bias. For CDQ we indeed observe an underestimation bias, as the value estimates are significantly lower than the true values.

Figure 1: Measuring estimation bias in the Q-value estimates of DDPG, CDQ and Balanced Clipped Double Q-learning (BCDQ) on three different OpenAI Gym environments. The first row shows the estimates of DDPG and CDQ: DDPG leads to an overestimation bias, while CDQ leads to an underestimation bias. The second row shows the value estimates of BCDQ, which are more accurate and do not exhibit a clear bias in either direction.

Figure 2: Average over 10 runs of the weighting parameter β for three OpenAI Gym environments (including Hopper-v3 and HalfCheetah-v3).

We propose Balanced Clipped Double Q-learning (BCDQ), a new algorithm to learn the critic with the goal of reducing this bias. We adopt the idea of two Q-networks, but train them on different TD targets. The TD target y_k for the k-th Q-network Q_θk, k ∈ {1, 2}, is defined as a weighted average of the target built from the network itself and the target built from the minimum of both networks,

y_k = β ( r + γ Q_θ̄k(s′, π_φ̄(s′)) ) + (1 − β) ( r + γ min_{j=1,2} Q_θ̄j(s′, π_φ̄(s′)) ),    (4)

where β ∈ [0, 1]. The first term corresponds to the TD target according to DDPG and the second term corresponds to the TD target of CDQ. While the first term tends to overestimate, the second term tends to underestimate the true Q-values. Correctly weighting between them can correct for this bias. However, setting β manually is difficult: the β that maximally reduces bias may change from environment to environment and also over the course of the training process. Consequently, we adjust β during training. As the goal is to minimize bias, and since β controls in which direction more bias is introduced, we use samples of the Q-values to learn β. After every episode we compute, for every visited state-action pair (s_t, a_t), the actual discounted future return from that pair onwards, R_t = Σ_{i=t}^{T} γ^{i−t} r_i, which is a sample of the quantity the Q-networks Q_θk(s_t, a_t) try to estimate. If the Q-estimates are higher than R_t, they overestimated and β should be decreased to give the "min" term in eq. 4 more weight. If, on the other hand, the Q-estimates are lower, we observe underestimation and β should be increased.
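A minimal sketch of the balanced target in eq. 4 follows. Because the reward term is shared by both targets, the β-weighted average of the two targets can equivalently be written inside the bootstrap; the target networks are again passed in as plain callables and the termination mask is an assumption of the sketch, not part of the original text.

```python
import numpy as np

def bcdq_targets(r, s_next, done, q1_target, q2_target, actor_target, beta, gamma=0.99):
    """Balanced CDQ targets of eq. 4: one target per critic, a beta-weighted mix of the
    critic's own DDPG-style bootstrap and the shared CDQ (minimum) bootstrap."""
    a_next = actor_target(s_next)
    q1, q2 = q1_target(s_next, a_next), q2_target(s_next, a_next)
    q_min = np.minimum(q1, q2)
    mask = 1.0 - done
    y1 = r + gamma * mask * (beta * q1 + (1.0 - beta) * q_min)
    y2 = r + gamma * mask * (beta * q2 + (1.0 - beta) * q_min)
    return y1, y2
```

With β = 0 this reduces to the CDQ target of eq. 3 for both critics, and with β = 1 each critic regresses on its own DDPG-style target.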
This behaviour (decreasing β when the value estimates overshoot the observed returns and increasing it when they undershoot) can be achieved by minimizing, with respect to β, an objective that penalizes the mismatch between the value estimates Q_θk(s^(j)_t, a^(j)_t) on the most recent episodes and the corresponding observed returns, where we restrict β to the interval [0, 1], E is the number of episodes we optimize over, s^(j)_t is the t-th state in the j-th considered episode (similarly for the actions a^(j)_t), T_j is the number of time steps in episode j, and R^(j)_t are the future discounted returns. The parameter β is updated every time the sum of all time steps in the episodes since the last update, Σ_{j=1}^{E} T_j, exceeds a fixed threshold. We set this threshold to the maximum number of steps possible in an episode. To optimize β we use stochastic gradient descent. We note that learning β increases the computational complexity only minimally, as it is just a single additional parameter to optimize. To evaluate the objective, a further forward pass through the Q-networks is performed, but no backward pass through them is needed during training.

Algorithm 1 (BTD3, excerpt): initialize critic networks Q_θ1, Q_θ2 and actor network π_φ with random parameters θ_1, θ_2, φ; initialize target networks θ̄_1 ← θ_1, θ̄_2 ← θ_2, φ̄ ← φ; set k = 0 and β ∈ [0, 1]; initialize the replay buffer B. For t = 1 to total timesteps: select an action with exploration noise, a ∼ π_φ(s) + ε with ε ∼ N(0, σ), and observe the reward r, the new state s′ and a binary value d indicating whether the episode ended; store the transition tuple (s, a, r, s′, d) in B and set k ← k + 1; if k ≥ beta_update_rate and the episode has ended, update β on the most recent episodes and reset k ← 0; if t mod actor_delay_rate = 0, update φ by the deterministic policy gradient.

We evaluated the accuracy of the value estimates of BCDQ and report the results in the second row of Figure 1. Compared to the other methods, BCDQ approximates the true Q-values much better. This indicates that the weighting parameter β can indeed be adjusted over the course of training such that the two opposing biases cancel each other out. The behaviour of β is visualized in Figure 2 as an average over 10 runs per environment. For the Hopper task it can be observed that after some time the parameter β gets very close to zero, which corresponds to using only CDQ to update the critic. For HalfCheetah, the CDQ term is not weighted very highly. This is explainable, as Figure 1 shows that CDQ induces a large bias on this task. Adjusting β over time allows putting more weight on the term that currently gives the more accurate estimate. This prevents the accumulation of errors introduced by bootstrapping in the TD target. The plots also show that treating β as a hyperparameter might be difficult, as it would have to be tuned for every environment. Furthermore, leaving β fixed could not account for the changing learning dynamics over the course of training. In Figure 2 it can be seen that different drifts exist in each environment; for example, in Walker2d the learned β decreases on average after an initial stabilization period.

Since the two Q-networks are not trained on the same target, as is the case for CDQ, the difference between the predictions of the two Q-networks will be higher. This suggests that, similarly to ensemble methods, the average of the two predictions might be an even better estimator. Following that rationale, in our algorithm the critic that teaches the actor is the average of the predictions of the two Q-networks. As a result of the above discussion, we propose the Balanced Twin Delayed Deep Deterministic policy gradient algorithm (BTD3), which builds on TD3. In contrast to TD3, our algorithm uses BCDQ instead of CDQ to update the critics.
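The β update described above can be sketched as follows. The excerpt does not reproduce the exact objective, so the squared-error form used here (between the β-weighted combination of a critic's own estimate and the clipped minimum, and the Monte-Carlo return) is an assumption of the sketch; the learning rate of 0.05 matches the value reported later in the experiments, and all array names are illustrative.

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """R_t = sum_{i=t}^{T} gamma^(i-t) * r_i for every step of one episode."""
    returns, running = np.zeros(len(rewards)), 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

def update_beta(beta, q_single, q_min, mc_returns, lr=0.05):
    """One SGD step on beta under the assumed objective
    L(beta) = mean((beta * q_single + (1 - beta) * q_min - R_t)^2).
    Since q_single >= q_min, the gradient pushes beta down when the estimates
    overshoot the returns and up when they undershoot, as described in the text."""
    blended = beta * q_single + (1.0 - beta) * q_min
    grad = np.mean(2.0 * (blended - mc_returns) * (q_single - q_min))
    return float(np.clip(beta - lr * grad, 0.0, 1.0))   # keep beta in [0, 1]
```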
For the learning of the actor the predictions of the two critics are averaged instead of using only the first critic. The BTD3 algorithm is shown in Algorithm 1. We evaluate our algorithm on a range of challenging continuous control tasks from , which makes use of the physics engine (version 2.0). To guarantee an apples-to-apples comparison with TD3, we extended the original source code of TD3 with our method and evaluate our algorithm with the default hyperparameters of TD3 for all tasks except for Humanoid-v3. We observed that TD3 does not learn a successful policy on Humanoid-v3 with the default learning rate of 0.001, but we found that TD3 does learn if the learning rate for both actor and critic is reduced. Consequently, we set it to 0.0001 for this task and did the same for BTD3. We set the learning rate for the weighting parameter β to 0.05 and initialize β = 0.5 at the beginning of the training for all environments. As is done in TD3, we reduce the dependency on the initial parameters by using a random policy for the first 10000 steps for HalfCheetah-v3, Ant-v3, Humanoid-v3 and the first 1000 steps for the remaining environments. After that period we add Gaussian noise N (0, 0.1) to each action in order to ensure enough exploration. During training the policy is evaluated every 5000 environment steps by taking the average over the episode reward obtained by rolling out the current policy without exploration noise 10 times. For each task and algorithm we average the of 10 trials each with a different random seed, except for Humanoid-v3, where we used 5 trials. We compare our algorithm to the state of the art continuous control methods SAC Haarnoja et al. (2018a) (with learned temperature parameter Haarnoja et al. (2018b) ), TD3 and to. For both, SAC and TD3, we used the source code published by Hopper-v3 Ant-v3 Humanoid-v3 Reacher-v2 The learning curves are shown in Figure 3. For all tasks BTD3 matches or outperforms TD3. Furthermore, it performs significantly better than SAC and DDPG. In Table 1 the are presented in terms of the average maximum episode reward. In order to compute that statistic, for each trial we computed the maximum over the evaluations that were executed all 5000 time steps, where the evaluations are itself the average over 10 rollouts of the current policy. Afterwards, we computed the average of this value over the different trials. The show that the best policies of BTD3 achieve significantly higher episode rewards than the best policies of the other methods. To further understand the influence of the dynamic weighting scheme we trained BTD3 with a fixed value for β. We evaluated for the values β ∈ {0.00, 0.25, 0.50, 0.75, 1.00}, where β = 0.00 corresponds to TD3 and β = 1.00 corresponds to DDPG. The averaged over 10 runs are shown in Figure 4. From the plots we can make two observations. First, it is essential that β is adjusted during the training. For any of the considered values of β leaving it fixed leads to a worse performance compared to BTD3 and in most cases also worse than TD3. In Figure 2 it was shown that the adjusted weighting parameter is on average over many runs attracted to different values depending not only on the environment but also on the timestep during training. The dynamic adjustment to prevent accumulating errors is not possible when β is fixed. Second, it is surprising that fixed values for β that would seem promising to try from Figure 2 can perform worse than other fixed values. 
For example, inspecting the plots in Figure 2, the value β = 0.75 seems a good fit for the HalfCheetah environment, but the evaluation shows that β = 0.25 and β = 0.50 perform better. This further supports the hypothesis that the most important part of BCDQ is the dynamic adjustment of the weighting parameter.

Figure 4: Learning curves for four different continuous control tasks from OpenAI Gym over 10 random seeds each. The shown algorithms are BTD3 and versions of it with a fixed value of β. For each algorithm the curves show the mean over 10 runs with different random seeds and are filtered with a uniform filter of size 15.

We showed that Clipped Double Q-learning (CDQ) induces an underestimation bias in the critic, while an overestimation bias occurs if just one Q-network is used. From this we derived the Balanced Clipped Double Q-learning algorithm (BCDQ), which updates the critic through a weighted average of the two update mechanisms. The weighting parameter is adjusted over the course of training by comparing the Q-values of recently visited state-action pairs with the actual discounted returns observed from those pairs onwards. It was shown that BCDQ achieves much more accurate value estimates by adjusting the weighting parameter. Replacing CDQ with BCDQ leads to the Balanced Twin Delayed Deep Deterministic policy gradient algorithm (BTD3). Our method achieves state-of-the-art performance on a range of continuous control tasks. Furthermore, BCDQ can be added to any other actor-critic algorithm while increasing the computational complexity only minimally compared to CDQ. It is also possible to use BCDQ for discrete action spaces; evaluating that approach is an interesting direction for future research.
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
r1xyayrtDS
A method for more accurate critic estimates in reinforcement learning.
We introduce a systematic framework for quantifying the robustness of classifiers to naturally occurring perturbations of images found in videos. As part of this framework, we construct ImageNet-Vid-Robust, a human-expert--reviewed dataset of 22,668 images grouped into 1,145 sets of perceptually similar images derived from frames in the ImageNet Video Object Detection dataset. We evaluate a diverse array of classifiers trained on ImageNet, including models trained for robustness, and show a median classification accuracy drop of 16\%. Additionally, we evaluate the Faster R-CNN and R-FCN models for detection, and show that natural perturbations induce both classification as well as localization errors, leading to a median drop in detection mAP of 14 points. Our analysis shows that natural perturbations in the real world are heavily problematic for current CNNs, posing a significant challenge to their deployment in safety-critical environments that require reliable, low-latency predictions. Despite their strong performance on various computer vision benchmarks, convolutional neural networks (CNNs) still have many troubling failure modes. At one extreme,`padversarial examples can cause large drops in accuracy for state of the art models with visually imperceptible changes to the input image BID4. But since carefully crafted`pperturbations are unlikely to occur naturally in the real world, they usually do not pose a problem outside a fully adversarial context. To study more realistic failure modes, researchers have investigated benign image perturbations such as rotations & translations, colorspace changes, and various image corruptions. However, it is still unclear whether these perturbations reflect the robustness challenges commonly arising in real data since the perturbations also rely on synthetic image modifications. Recent work has therefore turned to videos as a source of naturally occurring perturbations of images [6, BID0 . In contrast to other failure modes, the perturbed images are taken from existing image data without further modifications that make the task more difficult. As a , robustness to such perturbations directly corresponds to performance improvements on real data. However, it is currently unclear to what extent such video perturbations pose a significant robustness challenge. Azulay and Weiss BID0 only provide anecdotal evidence from a small number of videos. While work with a larger video dataset to obtain accuracy estimates, they only observe a small drop in accuracy of around 2.7% on videoperturbed images, suggesting that small perturbations in videos may not actually reduce the accuracy of current CNNs significantly. We address this question by conducting a thorough evaluation of robustness to natural perturbations arising in videos. As a cornerstone of our investigation, we introduce ImageNet-Vid-Robust, a carefully curated subset of ImageNet-Vid. In contrast to earlier work, all images in ImageNet-Vid-Robust were screened by a set of expert labelers to ensure a high annotation quality and to minimize selection biases that arise when filtering with CNNs. Overall, ImageNet-Vid-Robust contains 22,668 images grouped into 1,145 sets of temporally adjacent and visually similar images of a total of 30 classes. We then utilize ImageNet-Vid-Robust to measure the accuracy of current CNNs to small, naturally occurring perturbations. 
Our testbed contains over 40 different model types, varying both architecture and training methodology (adversarial training, data augmentation, etc). We find that natural perturbations from ImageNet-Vid-Robust induce a median 16% accuracy drop for classification tasks and a median 14% drop in mAP for detection tasks. Even for the best-performing model, we observe an accuracy drop of 14% -significantly larger than the 2.7% drop in over the same time horizon in the video. Our show that robustness to natural perturbations in videos is indeed a significant challenge for current CNNs. As these models are increasingly deployed in safety-critical environments that require both high accuracy and low latency (e.g., autonomous vehicles), ensuring reliable predictions on every frame of a video is an important direction for future work. The ImageNet-Vid-Robust dataset is sourced from videos contained in the ImageNet-Vid dataset, we provide more details about ImageNet-Vid in the supplementary. Next, we describe how we extracted neighboring sets of naturally perturbed frames from ImageNet-Vid to create ImageNet-Vid-Robust. A straightforward approach is to select a set of anchor frames and use nearby frames in the video with the assumption that such frames contain only small perturbations from the anchor frame. However, as FIG0 in the supplementary illustrates, this assumption is frequently broken, especially in the presence of fast camera or object motion. Instead, we collect a preliminary dataset of natural perturbations and then we manually review each of the frame sets. For each video, we first randomly sample an anchor frame and then take k = 10 frames before and after the anchor frame as candidate perturbation images. This in a dataset containing 1 anchor frame each from 1,314 videos, with approximately 20 candidate perturbation frames each BID0.Next, we curate the dataset with the help of four expert human annotators. The goal of the curation step is to ensure that each anchor frame and nearby frame is correctly labeled with the same ground truth class and that the anchor frame and the nearby frame are visually similar. For each pair of anchor and candidate perturbation frame, an expert human annotator labels whether the pair is correctly labeled in the dataset, whether the pair is similar. Asking human annotators to label whether a pair of frames is similar can be highly subjective. We took several steps to mitigate this issue and ensure high annotation quality. First, we trained reviewers to mark frames as dissimilar if the scene undergoes any of the following transformations: significant motion, significant change, or significant blur change, and additionally asked reviewers to mark each of the dissimilar frames with one of these transformations, or "other". Second, as presenting videos or groups of frames to reviewers could cause them to miss potentially large changes due to the well-studied phenomenon of change blindness, we present only a single pair of frames at a time to reviewers. Finally, to increase consistency in annotation, human annotators proceed using two rounds of review. In the first round, all annotators were given identical labeling instructions, and then individually reviewed 6500 images pairs. We instructed annotators to err on the side of marking a pair of images as dissimilar if a BID0 Note that some anchor frames may have less than 20 candidate frames if the anchor frame is near the start or end of the video. 
distinctive feature of the object is only visible in one of the two frames (such as the face of a dog). If an annotator was unsure about a pair he or she could mark the pair as "don't know".For the second round of review, all annotators jointly reviewed all frames marked as dissimilar, "don't know" or "incorrect". A frame was only considered similar if a strict majority of the annotators marked the pair of as "similar".After the reviewing was complete, we discarded all anchor frames and candidate perturbations that annotators marked as dissimilar or incorrectly labeled. Our final dataset contains 1,145 anchor frames with a minimum of 1, maximum of 20 and median of 20 similar frames. Given the dataset above, we would like to measure a model's robustness to natural perturbations. In particular, let A = {a 1, ..., a n} be the set of valid anchor frames in our dataset. Let Y = {y 1, ..., y n} be the set of labels for A. We let N k (a i) be the set of frames marked as similar to anchor frame a i. In our setting N k is a subset of the 2k temporally adjacent frames (plus/minus k frames from anchor).The pm-k analogues of the standard metrics for detection and classification evaluate only on the worst-case frame in the set of N k. We formally define the pm-k analogues for the standard metrics for classification and detection (acc pmk and mAP pmk) in the supplementary. We evaluate a testbed of 50 classification models and 3 state of the art detection models on ImageNet-Vid-Robust. We first discuss the various types of classification models evaluated with pm-k classification metric. We then study the per-class accuracies to study whether our perturbations exploits a few "hard" classes or affects performance uniformly across classes. Second we use the bounding box annotations inherited from ImageNet-VID to study the effect of detection models evaluated on ImageNet-Vid-Robust using the pm-k metric. We then analyze the errors made on the detection adversarial examples to isolate the effects of localization errors vs classification errors. In FIG1, we plot acc orig versus acc pmk for all classification models in our test bed and find that there is a surprisingly linear relationship between acc orig and acc pmk across all 48 models in our test bed. We note the similarity of this plot to FIG0 in BID12.1578 out 22668 frames in ImageNet-Vid-Robust have multiple correct classification labels, due to multiple objects in the frame. To handle this in a classification set- Each data point corresponds to one model in our testbed (shown with 95% Clopper-Pearson confidence intervals). Each "perturbed" frame was taken from a neighborhood of a maximum 10 adjacent frames to the original frame in a 30 FPS video. This allows the scene to change for roughly 0.3s. All frames were reviewed by humans to confirm visual similarity to the original frames.ting, we opt for the most conservative approach: we count a prediction as correct if the model predicts any of the classes for a frame. We note that this is a problem that plagues many classification datasets, where objects of multiple classes can be in an image BID12 but there is only one true label. We considered 5 models types of increasing levels of supervision. 
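Before turning to the individual model types, the worst-case ("pm-k") classification accuracy just described can be sketched as follows: an anchor frame only counts as correct if the model is also correct on every human-verified frame in its neighbourhood N_k(a_i). The exact formal definition lives in the supplementary material, so this sketch, with illustrative names, simply follows the prose description and the conservative multi-label rule above.

```python
def pmk_accuracy(anchors, neighbors, labels, predict):
    """Worst-case (pm-k) accuracy over the reviewed neighbourhoods N_k(a_i).

    predict(frame) returns the set of predicted labels for a frame; labels[a] is the set of
    correct labels of anchor a, and a frame counts as correct if any correct label is
    predicted (the conservative multi-label rule)."""
    correct = 0
    for a in anchors:
        if all(predict(f) & labels[a] for f in neighbors[a]):
            correct += 1
    return correct / len(anchors)
```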
We present our full table of classification accuracies in the supplementary material, and for representative models from each model type in Table 1.ILSVRC Trained As mentioned in??, leveraging the WordNet hierarchy enables evaluating models available from trained on the 1000 class ILSVRC challenge on images in ImageNet-Vid-Robust directly. We exploit this to evaluate a wide array of model architectures against our natural perturbations. We note that this test set is a substantial distribution shift from the original ILSVRC validation set that these models are tuned for. Thus we will expect the benign accuracy acc orig to be lower than the comparable accuracy on the ILSVRC validation set. However the quantity of interest for this experiment is the difference between the original and perturbed accuracies accuracies acc origacc pmk, which should be less sensitive to an absolute drop in acc orig.ILSVRC Trained with Noisy Augmentation One hypothesis for the accuracy drop is that subtle artifacts and corruptions introduced by video compression schemes could introduce a large accuracy drop when evaluated on these corrupted frames. The worst-case nature of the pm-k metric could be biasing evaluation towards these corrupt frames. One model for these corruptions are the perturbations introduced in. To test this hypothesis we evaluate models augmented with a subset of the perturbations (Gaussian noise Gaussian blur, shot noise, contrast change, impulse noise, JPEG compression) found in. We found that this augmentation scheme did little to help robustness against our perturbations. ILSVRC Trained for L 2 /L 1 Robustness We evaluate the best performing robust model against the very strong L 2 /L 1 attacks. We find that this model does have a slightly smaller performance drop than both ILSVRC and ILSVRC trained with noise augmentation but the difference is well within the error bars induced by the small size of our evaluations set. We also note that this robust model gets significantly lower original and perturbed accuracy than examples from either of the model types above. ILSVRC Trained + Finetuned on ImageNet-VID To adapt to the 30 class problem and the different domain of videos we fine tune several network architectures on the training set in ImageNet VID. We start with a base learning rate of 1e 4 and train with the SGD optimizer until the validation accuracy plateaus. We trained using cross entropy loss using the largest object in the scene as the label during training, as we found this performed better than training using a multi-label loss function. After training for 10 epochs we evaluate on ImageNet-Vid-Robust. These models do improve in absolute accuracy over their ILSVRC pretrained counterparts (12% for a ResNet50). However, this improvement in absolute accuracy does not significantly decrease the accuracy drop induced by natural perturbations. Finally, we analyze whether additional supervision, in the form of bounding box annotations, improves robustness. To this end, we train the Faster R-CNN detection model with a ResNet 50 backbone on ImageNet Vid. Following standard practice, the detection backbone is pre-trained on ILSVRC. To evaluate this detector for classification, we assign the score for each label for an image as the score of the most confident bounding box for that label. 
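The detector-to-classifier conversion described at the end of the previous paragraph (each class is scored by its most confident bounding box) can be sketched as follows; the detection output format is an assumption of the sketch.

```python
def classification_scores(detections, num_classes):
    """detections: list of (class_id, confidence, box) tuples produced by the detector
    for one image. The image-level score of a class is the confidence of its
    highest-scoring box; classes with no box keep a score of zero."""
    scores = [0.0] * num_classes
    for class_id, confidence, _box in detections:
        scores[class_id] = max(scores[class_id], confidence)
    return scores
```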
We find that this transformation reduces accuracy compared to the [...]. To analyze the generalizability of natural perturbations to other tasks, we next analyze their impact on the object localization and detection tasks. We report results for two related tasks: object localization and detection. Object detection is the standard computer vision task of correctly classifying an object and regressing the coordinates of a tight bounding box containing the object. "Object localization", meanwhile, refers to only the subtask of regressing to the bounding box, without attempting to correctly classify the object. This is an important problem from a practical perspective (for example, the size and location of an obstacle may be more important for navigation than its category), as well as from an analytical perspective, as it allows analyzing mistakes orthogonal to classification errors. For example, it may be the case that natural perturbations frequently cause misclassification errors, as it may be natural to mistake a cat for a fox, but cause few localization errors. We present our results using the popular Faster R-CNN and R-FCN architectures for object detection and localization in TAB2. We first note the significant drop in mAP of 12-15% for object detection due to perturbed frames for both the Faster R-CNN and R-FCN architectures. Next, we show that localization is indeed easier than detection, as the mAP increases significantly (e.g., from 61.8 to 75.5 for Faster R-CNN with a ResNet-50 backbone). Perhaps surprisingly, however, switching to the localization task does not improve the delta between original and perturbed frames, indicating that natural perturbations induce both classification and localization errors. An advantage of using the ImageNet-Vid dataset as the source of our dataset is that all 30 object [...].

FIG0: Temporally adjacent frames may not be visually similar. We visualize three randomly sampled frame pairs where the nearby frame was marked during human review as "dissimilar" to the anchor frame and discarded from our dataset.

Classification: writing f for the classifier, classification accuracy is defined as acc_orig = 1 − (1/n) Σ_{i=1}^{n} 1[f(a_i) ≠ y_i], and its pm-k analogue as acc_pm-k = 1 − (1/n) Σ_{i=1}^{n} max_{b ∈ N_k(a_i)} 1[f(b) ≠ y_i], which simply corresponds to picking the worst frame from each N_k(a_i) set before computing misclassification accuracy. Detection: the standard metric for detection is the mean average precision (mAP) of the predictions at a fixed intersection-over-union threshold. We define the pm-k analog of mAP by replacing each anchor frame in the dataset with a nearby frame that minimizes the per-image average precision. Note that as the category-specific average precision is undefined for categories not present in an image, we minimize the average precision across categories for each frame rather than the mAP. With a slight abuse of notation denoting y_b as the label for frame b, we then define mAP_pm-k = mAP({(b_i*, y_{b_i*})}_{i=1}^{n}), where b_i* = argmin_{b ∈ N_k(a_i)} AP(f(b), y_b). In FIG1, we plot the relationship between perturbed accuracy and perturbation distance (i.e., the k in the pm-k metric described in Section 3). We note that the entire x-axis in FIG1 corresponds to a temporal distance of 0s to 0.3s between the original and perturbed frames. We study the effect of our perturbations on the 30 classes found in ImageNet-Vid-Robust to determine whether our performance drop was concentrated in a few "hard" classes.
"artificial" nature of L p attacks, recent work has proposed more realistic modifications to images. Engstrom et. al. BID4 study an adversary that performs minor rotations and translations of the input, Hosseni et. al. BID12 1e-5 for all models. We additionally detail hyperparameters for detection models in
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SklRoy3qaN
We introduce a systematic framework for quantifying the robustness of classifiers to naturally occurring perturbations of images found in videos.
Structured tabular data is the most commonly used form of data in industry according to a Kaggle ML and DS Survey. Gradient Boosting Trees, Support Vector Machines, Random Forests, and Logistic Regression are typically used for classification tasks on tabular data. The recent work on the Super Characters method, which uses two-dimensional word embedding, achieved state-of-the-art results in text classification tasks, showcasing the promise of this new approach. In this paper, we propose the SuperTML method, which borrows the idea of the Super Characters method and two-dimensional embedding to address the problem of classification on tabular data. For each input of tabular data, the features are first projected into a two-dimensional embedding like an image, and this image is then fed into fine-tuned ImageNet CNN models for classification. Experimental results show that the proposed SuperTML method achieves state-of-the-art results on both large and small datasets. In data science, data is categorized into structured data and unstructured data. Structured data is also known as tabular data, and the terms will be used interchangeably. Anthony Goldbloom, the founder and CEO of Kaggle, observed that winning techniques have been divided by whether the data was structured or unstructured BID12. Currently, DNN models are widely applied to unstructured data such as images, speech, and text. According to Anthony, "When the data is unstructured, it's definitely CNNs and RNNs that are carrying the day" BID12. The successful CNN models in the ImageNet competition BID8 have outperformed human performance on the image classification task since ResNet BID6 in 2015. On the other side of the spectrum, machine learning models such as Support Vector Machines (SVM), Gradient Boosting Trees (GBT), Random Forests, and Logistic Regression have been used to process structured data. According to a recent survey of 14,000 data scientists by Kaggle, a subdivision of structured data known as relational data is reported as the most popular type of data in industry, with at least 65% of respondents working daily with relational data. Regarding structured data competitions, Anthony says that currently XGBoost is winning practically every competition in the structured data category BID4. XGBoost BID2 is one popular package implementing the Gradient Boosting method. Recent research has tried using one-dimensional embeddings with RNNs or one-dimensional CNNs to address TML (Tabular Machine Learning) tasks, i.e., tasks that deal with structured data processing BID7 BID11, as well as categorical embeddings for tabular data with categorical features BID5. However, this reliance upon one-dimensional embeddings may soon come to change. Recent NLP research has shown that the two-dimensional embedding of the Super Characters method BID9 is capable of achieving state-of-the-art results on large dataset benchmarks. The Super Characters method is a two-step method that was initially designed for text classification problems. In the first step, the characters of the input text are drawn onto a blank image. In the second step, the image is fed into two-dimensional CNN models for classification. The two-dimensional CNN models are trained by fine-tuning from models pretrained on a large image dataset, e.g., ImageNet. In this paper, we propose the SuperTML method, which borrows the concept of the Super Characters method to address TML problems.
For each input, the tabular features are first projected onto a two-dimensional embedding and fed into fine-tuned two-dimensional CNN models for classification. The proposed SuperTML method handles categorical features and missing values in tabular data automatically, without the need for explicit conversion into numerical values. The SuperTML method is motivated by the analogy between TML problems and text classification tasks. For any sample given in tabular form, if its features are treated like stringified tokens of data, then each sample can be represented as a concatenation of tokenized features. By applying this view of a tabular sample, the existing CNN models used in the Super Characters method can be extended to be applicable to TML problems. As mentioned in the introduction, the combination of two-dimensional embedding (a core competency of the Super Characters methodology) and pre-trained CNN models has achieved state-of-the-art results on text classification tasks. However, unlike the text classification problems studied in BID9, tabular data has features in separate dimensions. Hence, generated images of tabular data should reserve some gap between features in different dimensions in order to guarantee that features will not overlap in the generated image. SuperTML is composed of two steps, the first of which is two-dimensional embedding. This step projects the features of the tabular data onto the generated images, which will be called SuperTML images in this paper. The conversion of tabular training data to SuperTML images is illustrated in Figure 1, where a collection of samples containing four tabular features is processed. The second step is using pretrained CNN models to fine-tune on the generated SuperTML images. Figure 1 only shows the generation of SuperTML images for the training data. It should be noted that for inference, each instance of testing data goes through the same preprocessing to generate a SuperTML image (all of which use the same configuration of two-dimensional embedding) before being fed into the CNN classification model. The variant that sizes features by their importance, known as SuperTML_VF, is described in Algorithm 1 (excerpt: fine-tune the pretrained CNN model on the generated SuperTML images; return the trained CNN model on the tabular data). To make SuperTML more autonomous and remove the dependency on the feature importance calculation done in Algorithm 1, the SuperTML_EF method is introduced in Algorithm 2 (excerpt: for each feature of the sample, draw the feature in the same font size without overlapping, such that the features of the sample occupy the image size as much as possible). It allocates the same size to every feature, and thus tabular data can be directly embedded into SuperTML images without the need for calculating feature importance. This algorithm shows even better results than Algorithm 1, which will be described in more depth later in the experimental section. The data statistics from the UCI Machine Learning Repository are shown in TAB2. "This is perhaps the best known database to be found in the pattern recognition literature." The Iris dataset is widely used in machine learning courses and tutorials. FIG1 shows an example of a generated SuperTML image, created using Iris data. The experimental results of using SEnet-154 are shown in Table 2. For this dataset, we use SuperTML_VF, which gives features different sizes on the SuperTML image according to their importance score. The feature importance score is obtained using the XGBoost package BID2. One example of a SuperTML image created using data from this dataset is shown in FIG1.
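To make the embedding step concrete, the following is a minimal sketch of the equal-font-size variant (SuperTML_EF): each feature value is written as text into its own cell of a blank image, which is then fed to the CNN. The 224x224 canvas, the 2x2 cell layout and the use of Pillow's default font are assumptions of this sketch; the method described above instead scales the font so that the features fill the image as much as possible.

```python
from PIL import Image, ImageDraw, ImageFont

def supertml_ef_image(features, size=224, grid=(2, 2)):
    """Draw the stringified features of one tabular sample onto a blank image (equal cells)."""
    img = Image.new("L", (size, size), color=0)              # black canvas
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    rows, cols = grid
    cell_w, cell_h = size // cols, size // rows
    for i, value in enumerate(features):
        r, c = divmod(i, cols)
        # Categorical strings and missing values ("?") are drawn as-is, no numeric conversion.
        draw.text((c * cell_w + 4, r * cell_h + 4), str(value), fill=255, font=font)
    return img

# Example: one Iris sample with its four features placed on a 2x2 grid.
example = supertml_ef_image([5.1, 3.5, 1.4, 0.2])
```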
The results in Table 2 show that the SuperTML method obtained a slightly better accuracy than XGBoost on this dataset. The task of the Adult dataset is to predict whether a person's income is larger or smaller than 50,000 dollars per year based on a collection of surveyed data. For categorical features that are represented by strings, the Squared English Word (SEW) method BID10 is used. One example of a generated SuperTML image is given in Figure 3.

Figure 3: SuperTML_VF image example from the Adult dataset. This sample has age = 59, capital gain = 0, capital loss = 0, hours per week = 40, fnlweight = 372020, education number = 13, occupation = "?" (missing value), marital status = "Married-civ-spouse", relationship = "Husband", workclass = "?" (missing value), education = "Bachelors", sex = "Male", race = "White", native country = "United-States".

Table 2 shows the results on the Adult dataset. We can see that on this dataset the SuperTML method still has a higher accuracy than the fine-tuned XGBoost model, outperforming it by 0.32 percentage points of accuracy. The Higgs Boson Machine Learning Challenge involved a binary classification task to classify quantum events as signal or background. It was hosted by Kaggle, and though the contest is over, the challenge data is still available as open data BID1. It has 25,000 training samples and 55,000 testing samples. Each example has 30 features, each of which is stored as a real number value. In this challenge, the AMS score is used as the performance metric. FIG3 shows two examples of generated SuperTML images. TAB3 shows the comparison of different algorithms. The DNN method and XGBoost used in the first two rows use the numerical values of the features as input to the models, which is different from the SuperTML method of using two-dimensional embeddings. The results show that the SuperTML_EF method gives the best AMS score of 3.979. In addition, SuperTML_EF gives better results than SuperTML_VF, which indicates that the SuperTML method can work well without the calculation of importance scores. The proposed SuperTML method borrows the idea of two-dimensional embedding from Super Characters and transfers the knowledge learned from computer vision to structured tabular data. Experimental results show that the proposed SuperTML method achieves state-of-the-art results on both large and small tabular datasets (TAB2).
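For reference, a sketch of the AMS (approximate median significance) metric used in the Higgs experiment follows, in the form commonly associated with the challenge. The paper itself only cites the metric, so the exact formula and the regularization constant b_reg = 10 should be treated as assumptions of this sketch.

```python
import math

def ams(s, b, b_reg=10.0):
    """Approximate median significance; s and b are the weighted sums of selected
    signal (true positive) and background (false positive) events, respectively."""
    return math.sqrt(2.0 * ((s + b + b_reg) * math.log(1.0 + s / (b + b_reg)) - s))
```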
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
r1MCjkn5pV
Deep learning for structured tabular data machine learning using pre-trained CNN model from ImageNet.
Learning rich representations from predictive learning without labels has been a longstanding challenge in the field of machine learning. Generative pre-training has so far not been as successful as contrastive methods in modeling representations of raw images. In this paper, we propose a neural architecture for self-supervised representation learning on raw images called the PatchFormer which learns to model spatial dependencies across patches in a raw image. Our method learns to model the conditional probability distribution of missing patches given the context of surrounding patches. We evaluate the utility of the learned representations by fine-tuning the pre-trained model on low data-regime classification tasks. Specifically, we benchmark our model on semi-supervised ImageNet classification which has become a popular benchmark recently for semi-supervised and self-supervised learning methods. Our model is able to achieve 30.3% and 65.5% top-1 accuracies when trained only using 1% and 10% of the labels on ImageNet showing the promise for generative pre-training methods. Deep neural networks are capable of learning rich abstract representations from raw high dimensional data in an end-to-end fashion . A big weakness of these neural networks is the reliance on abundant labeled datasets. Self-supervised and unsupervised representation learning approaches have been proposed to address this problem . It is still an open problem in the field to figure out how to take advantage of large unlabeled datasets, use them for learning rich representations and improving the data-efficiency of supervised learning systems. A classic example of successful unsupervised learning of rich representations is word2vec where the authors showed that distributed vector representations of words could be learned by contrastively predicting the neighboring words given surrounding words. The shift from word embeddings to sequence embeddings in recent times began when showed that pre-trained sequence to sequence autoencoders on text corpora could be useful for a number of downstream tasks such as text classification and sentiment analysis. Followed by this, it was shown in that language modeling is useful in providing deep contextual sentence embeddings that could be fine-tuned on a number of natural language understanding tasks. is another example of such a success. In more recent times, the transformer has emerged as a powerful architecture to model complex dependencies across a long sequence using global self-attention. OpenAI Generative Pre-Training (GPT) showed that training large Transformer models on BooksCorpus could lead to rich and useful representations that could be fine-tuned on a variety of downstream tasks covering language understanding, commonsense reasoning and question-answering. The biggest success in unsupervised pre-training was achieved by BERT where the assumption for using causal language modeling was pointed out as unnecessary and it was shown that training deep transformers in a bi-directional fashion to perform the objective of masked language modeling and next sentence prediction could lead to rich and useful representations covering a wide span of natural language understanding downstream tasks. Therefore, it is useful to address the following question: How do we translate the successes of masked language modeling and deep transformers to images? 
Unlike language which is a layer of abstraction to be able to understand the world and communicate thoughts, images are raw sensory observations. It is therefore much harder to model the relationship across pixels both spatially and temporally simply because the dimensionality is much higher. Let's first look at the question of whether generative pre-training is well suited for images or not. There is a belief that generative approaches are more suited to abstract inputs such as language wordpieces but not for less abstract entities like pixels or audio waveform bits (van den ; ; ;). While it may as well turn out to be true, it is useful to investigate how far we could push generative approaches for pre-training even on domains they are not well suited for, such as images. A successful example of such an approach is the adversarial method BiGAN . While BiGAN (and BigBiGAN) are meant for learning useful highlevel representations of raw images, they still retain the generative modeling aspect of unsupervised learning by learning to jointly model an encoder and a generator using the generative adversarial loss. On the other hand, there has been incredible progress in recent years in generative modeling of raw pixels and audio waveforms using maximum likelihood. Beginning with (b), we have seen successes in generating diverse images by modeling the conditional distribution of pixels given context of neighboring pixels. WaveNet (a) is an example of successful deployment of such techniques for modeling the distribution of raw audio waveforms when conditioned on text. adopt a similar technique for generating future frames of a video conditioned on the past. More recently, have pushed on using strided self-attention to achieve high-quality unconditional samples of ImageNet building upon successes of and . Therefore, it is very reasonable to ask ourselves the following question: If generative models can work on such high dimensional data, is it necessarily the case that they would be ill-suited from a representation learning perspective? If no, how do we leverage these successes for representation learning? Further, how do we take inspiration from the big representation learning successes in natural language processing and the generative modeling successes for images and audio and design a representation learning approach for images? As far as representation learning on images goes, the state-of-the-art systems at the moment are contrastive methods. Specifically, Contrastive Predictive Coding (CPC) (van den) which learns to contrastively predict the future given the past by sampling negatives across and between sequences has been shown to be a universally powerful representation learning approach for multiple modalities (audio, images, text, control). (Hénaff et al., 2019) and achieve impressive linear classifier probe metrics for their representations that were trained contrastively to maximize mutual information across views and space. (Hénaff et al., 2019) also show that these representations could be used for downstream tasks such as semi-supervised image classification in the low-data regime going on to record impressive in the 1% and 10% ImageNet classification. While such impressive have been shown using the contrastive methods, methods of such quality for generative approaches are ye to be shown on images. Secondly, CPC and related methods adopt convolutional architectures for learning the representations. 
We believe it is worth the research effort to investigate architectures that incorporate self-attention so that we could translate language domain's success to other domains. Stand-Alone Self-Attention has shown that self-attentive architectures could be designed to match convolutional architectures on image classification and object detection. Such a is promising in the sense that we now know that self-attentive architectures are not a limiting factor for downstream classification performance. In this paper, we attempt to inspire from a few key engineering deicisons that have benefitted the various successful approaches discussed above to motivate our design of a generative pre-training method for images. 1. Predicting subscales and low-bit depth for pixels: showed that modeling pixels by sequentially modeling the subscales and low-bit depth versions of the raw image is extremely useful. (a) also attempted to initially model 8-bit audio rather than 16-bit. Therefore, it makes sense to model the only the most significant few bits while attempting to decode pixels for representation learning. Higher order bits are more relevant for texture and finer-details and may not be crucial for representation learning performance. 2. Use of self-attention for aggregating global context: Self-Attention is an extremely powerful approach for aggregating global contextual representations across large sequences. The adoption of self-attention for images began with who used non-local layers for activity recognition. and exploit non-local layers for high-fidelity image generation. has also shown that self-attention can be used to good effect for modeling distribution of latents for likelihood-based image generation while (; ;) are examples for self-attentive density models. 3. Learning spatial dependencies across patches: CPC learns to spatially predict neighboring patches given context of surrounding patches. Image Transformers adopts self-attention that takes into account local as well as global dependencies behaving like a patch-based generative model. explot modeling spatial PixelCNNs over subscales for global image dependencies. attempt to modify CPC for image representation learning by using the patch-based data extraction and modeling dependencies in a BERT-like fashion using self-attention. Our key contributions are as follows: 1. We propose a new architecture, PatchFormer, for modeling bi-directional dependencies across patches. Our architecture learning to decode missing patches in an image by extracting represenstations of the given patches, using attention-pooling to aggregate the context, and decode the low-bit grayscale sub-sampled versions of the missing patches. Specifically, we decode only the 2-bit grayscale version of the missing patch. 2. We show that our model could be pre-trained on the unsupervised objective of decoding missing patches and fine-tuned on downstream low-data regime classification tasks. 3. We achieve somewhat competitive downstream ImageNet classification with CPC (Hénaff et al., 2019) and are surprisingly even better than the other contrastive approach for semi-supervised downstream classification, Selfie in spite of adopting a generative approach. Our patch-extraction setup is described in Figure 1. Our input images are 224x224x3. We extract 16x16 patches from this global image with a stride of 16x16. This in a grid of 14x14 patches. Among the 296 patches, we spatially mask 60% of the patches and use the remaining 78 patches. 
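Before turning to the mask design, the patch-grid extraction just described can be sketched with a simple NumPy reshape: a 224x224 image is cut into a 14x14 grid of non-overlapping 16x16 patches, and only the unmasked positions are later passed to the encoder. Function and variable names are illustrative.

```python
import numpy as np

def extract_patches(image, patch=16):
    """image: (224, 224, 3) array -> (14, 14, 16, 16, 3) grid of non-overlapping patches."""
    h, w, c = image.shape
    gh, gw = h // patch, w // patch
    return image.reshape(gh, patch, gw, patch, c).swapaxes(1, 2)

image = np.zeros((224, 224, 3), dtype=np.uint8)       # placeholder input
grid = extract_patches(image)                          # (14, 14, 16, 16, 3)
patches = grid.reshape(-1, 16, 16, 3)                  # 14 * 14 = 196 patches in total;
                                                       # roughly 60% are hidden from the encoder
```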
Our masks are designed with a 7x7 grid first (with the same masking ratio) and then upsampled to 14x14 with nearest-neighbor upsampling. This is to ensure that our masks are contiguous and blocky to make the prediction task harder. We then use a Transformer to spatially associate the convolutional features extracted from the nonmasked patches. Our self-attention mechanism flattens the grid and adds learned position position embeddings to each. To regularize the model, we use factorized position embeddings (similar to ) for the X and Y coordinates. We use 10 layers of masked-self-attention with 1024 dimensional embeddings and 16 attention heads. Our pointwise MLP uses the GeLU nonlinearity and a widening layer of 4x similar to BERT. We then use the extracted attention-pooled context vectors to decode the missing patches. We use a regular residual upsampling decoder to decode 8x8 versions of the missing patches originally of shape 16x16. The ground truth for the missing patches comes from the 2-bit gray-scale centre crop of these patches. We also additionally add soft-label noise to our cross entropy loss (0.01). Our model is trained with AdamWeightDecay Optimizer using a learning rate of 2e-4 and weightdecay coefficient of 0.001 with a batch size of 128 on a v3-32 cloud TPU over 300,000 update steps. Figure 1: PatchFormer Architecture: We extract a grid of non-overlapping patches from the raw image. We then pick random crops of unmasked patches for spatial jittering. This is done to ensure the model does not cheat similar to (van den). We then apply random data-augmentations to these patches such as Inception Noise (random brightness, hue, saturation and contrast) as well as arbitrarily flip the patches 20% of the times. We also randomly convert the patches to their low-bit depth versions with the depth sampled randomly from 2 to 8 and apply random grayscale blending with 20% probability. The pixels are then scaled to [-1, 1] after applying these augmentations. A convolutional neural network (ResNet-41 -a customized version of the first 3 stacks of ResNet-50) is applied to extract the embeddings for these patches. A deep self-attentive transformer then globally associates the patches across space (with added factorized position encodings) before decoding the missing patches. We decode subsampled and low-bit grayscale version of the missing patches to make the prediction task simpler and more suited for representation learning. We remove the decoder and extract the attention-pooled embedding of patches for classification (a residual layer operates on top of the attention pooled embedding). To ensure there is no mismatch between training and fine-tuning times (because of masking while pre-training), we randomly mask out 10% of the patches even while fine-tuning. While this does hurt the model in terms of good performance, it also helps by regularizing the model. We fine-tune the model using the same optimizer as used for pre-training with batch sizes in {16, 32, 64, 128, 256} and learning rates in {3e-5, 2e-4, 4e-4, 1e-3}. We also use soft-labels while fine-tuning to add some regularization to the model. Additionally, our model also employs dropout noise at the end to prevent overfitting on the low-data regime classification. The residual block we use on top of the attention-pooling is the same as what we use for our convolutional encoder described in Figure 2. We pre-train using the 1.27 million unlabeled ImageNet training dataset. 
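A small sketch of the two data-preparation details described above, the blocky mask (a 7x7 mask upsampled to 14x14 by nearest neighbour) and the low-bit decoding target (the 2-bit grayscale central 8x8 crop of each masked 16x16 patch), is given below; names and the exact quantization are illustrative assumptions of the sketch.

```python
import numpy as np

def blocky_mask(mask_ratio=0.6, rng=np.random):
    """Sample a 7x7 boolean mask and upsample it 2x by nearest neighbour so that the
    masked regions of the 14x14 patch grid are contiguous (True = hidden patch)."""
    coarse = np.zeros(49, dtype=bool)
    coarse[: int(round(mask_ratio * 49))] = True
    rng.shuffle(coarse)
    return np.kron(coarse.reshape(7, 7), np.ones((2, 2), dtype=bool))   # (14, 14)

def patch_target(patch):
    """Decoding target for a masked 16x16x3 patch: 2-bit grayscale of the central 8x8 crop."""
    gray = patch.astype(np.float32).mean(axis=-1)        # grayscale, values in [0, 255]
    crop = gray[4:12, 4:12]                              # central 8x8 region
    return np.clip(crop // 64, 0, 3).astype(np.int64)    # 4 intensity levels (2 bits)
```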
We then fine-tune for image classification on 1%, 10% and 20% of the dataset and report the accuracies (top-1 and top-5) on the validation set of 50000 images for each of these subsets. We have proposed a new architecture for generative pre-training on images called the PatchFormer. We highlighted the key tricks to making our model learn useful representations for downstream classification tasks in spite of decoding pixels. We have shown that we are competitive with state-of-the-art contrastive pre-training methods such as CPC on the low-data-regime ImageNet classification benchmark.
Decoding pixels can still work for representation learning on images
Adaptive regularization methods pre-multiply a descent direction by a preconditioning matrix. Due to the large number of parameters of machine learning problems, full-matrix preconditioning methods are prohibitively expensive. We show how to modify full-matrix adaptive regularization in order to make it practical and effective. We also provide novel theoretical analysis for adaptive regularization in non-convex optimization settings. The core of our algorithm, termed GGT, consists of efficient inverse computation of square roots of low-rank matrices. Our preliminary experiments underscore improved convergence rate of GGT across a variety of synthetic tasks and standard deep learning benchmarks. Stochastic gradient descent is the workhorse behind the recent deep learning revolution. This simple and age-old algorithm has been supplemented with a variety of enhancements to improve its practical performance, and sometimes its theoretical guarantees. Amongst the acceleration methods there are three main categories: momentum, adaptive regularization, and variance reduction. Momentum (in its various incarnations, like heavy-ball or Nesterov acceleration) is the oldest enhancement. It has a well-developed theory, and is known to improve practical convergence in a variety of tasks, small and large. It is also easy to implement. Variance reduction is the most recent advancement; in theory and practice, it is mostly applicable to convex optimization, and is thus less influential in deep learning. This brings us to adaptive regularization: the most sophisticated, hard to implement, and debated acceleration method. While state-of-the-art optimizers such as Adam and AdaGrad (; BID13) do use adaptive regularization, they do so in a very limited form: with diagonal matrices, often marketed as per-coordinate adaptive learning-rate methods. Despite solid theoretical guarantees, the practical value of diagonal adaptive regularization as compared to "vanilla" SGD has been the subject of much debate BID48. However, the efficacy of full-matrix adaptive regularization has been relatively unexplored. This is due to the prohibitive computational cost associated with full-matrix operations: full AdaGrad requires taking the inverse square root of a large matrix. In this paper, we present GGT, a practical solution to the computational problems plaguing full-matrix adaptive regularization, making this technique scalable for modern deep models. At the heart of our method is a simple, GPU-friendly way to apply the inverse square root of the low-rank second-moment matrix of recent gradients; see FIG0. GGT's running time is comparable to state-of-the-art optimizers. We proceed to show that full-matrix preconditioning allows for much better exploitation of anisotropic curvature in loss landscapes. First, we show synthetic experiments which demonstrate clear benefits of GGT over baselines, especially when the problem is ill-conditioned. Then, we implement GGT at scale, and show that the benefits translate to faster training on standard deep learning benchmarks. Our improvement is most salient in complicated landscapes like RNN training. Our algorithm comes with theoretical guarantees. We give the first proof of convergence to first-order critical points for an algorithm with adaptive regularization in a stochastic non-convex setting, featuring a rate which is dependent on an adaptive ratio. We show examples where our bound is stronger than that for SGD, providing some theoretical basis for our empirical findings.
Since the introduction of AdaGrad BID13, diagonal adaptive regularization has been a mainstay in the machine learning practitioner's toolbox. A quick perusal of the literature shows that these methods have continued to thrive in the deep learning era, and appear in all major frameworks BID0 BID39 BID9. By citation count (or GitHub search hits), Adam is by far the most popular adaptive optimizer for training a variety of modern deep models. For this reason, this paper's exposition is targeted towards a full-matrix drop-in replacement for Adam; however, our techniques extend straightforwardly to a plethora of variants, like RMSprop BID45, Adadelta, Nadam BID12, etc. Full-matrix adaptive regularization has existed alongside the more commonly used diagonal-matrix manifestation since their common inception in BID13; however, a major obstacle to the scalability of these methods is the need for the storage and inversion of square matrices in the model dimension. This becomes prohibitively expensive in dimension greater than 10^4, while state-of-the-art models regularly exceed 10^7 parameters. Matrix sketching has been employed to approximate the AdaGrad preconditioner BID25 BID34; however, the sketched estimate for the matrix inverse can be sensitive to noise. In the former, the authors report a 5-10x overhead over AdaGrad, even with < 10^5 model parameters; we could not find a usable GPU implementation for their requisite rank-1 QR update. BID19 propose a way to do AdaGrad with Kronecker products of full-matrix preconditioners, a more limited setting which requires knowledge of the model's structure. Finally, as we argue in Section 3.1, there is intrinsic value in "forgetting" past curvature using an exponential window. With this, a low-rank preconditioning matrix naturally arises, allowing us to bypass the computational need for sketching in the model dimension or architecture-dependent restriction of the preconditioner. Our algorithm bears a superficial resemblance to L-BFGS BID28, a version of BFGS BID6 BID15 BID18 BID43 which uses a sliding window of gradient history. Although some variants are viable for large-scale implementation, these quasi-Newton methods, along with (subsampled, online, cubic-regularized) Newton methods BID14 BID2 BID30 BID20 BID1 BID8, exhibit very different dynamics than the standard optimizers in deep learning, and thus have not seen widespread adoption. We find recent deep learning applications of second-order methods (e.g. BID32 BID33) to be intriguing, though outside the scope of this paper. Recently, the role of adaptive regularization has been a hotly contested topic. In BID48, the authors suggest that properly-tuned SGD exhibits superior generalization to adaptive methods. In turn, propose switching the optimizer from Adam to SGD at the end of training, to reap the advantages of each. Influentially, Adam's convergence has been the object of recent scrutiny BID42. However, Adam continues to enjoy successful convergence in practice; the problematic construction involves pathological outlier gradients. We do not use the analyses of Adam or AMSGrad. Several parallel works BID27 BID51 BID47 BID10 BID40 BID50 have studied the convergence of adaptive methods for non-convex optimization, matching the asymptotic iteration complexity of SGD. Apart from our algorithmic contribution, our work is (to our knowledge) the first attempt to characterize the advantage of adaptivity in terms of the dimension and geometry of the optimization problem.
Our main algorithmic contribution is GGT, an efficient first-order algorithm for full-matrix adaptive preconditioning. In brief, GGT uses the preconditioner from full-matrix AdaGrad, with gradient history attenuated exponentially as in Adam, and truncated to a window parameter r. The name GGT acts as a convenient mnemonic for the gradient second-moment matrix GG^T maintained by full-matrix AdaGrad, even though we never compute this matrix. The mathematical specification of GGT is given in Algorithm 1, in the usual model of stochastic optimization (see Section 4), with gradients ∇f(x). Notice that the coordinate-wise scaling of Adam is recovered by zeroing out the off-diagonal entries of GG^T. Algorithm 1 GGT adaptive optimizer. 1: Input: initializer x_1, window size r, learning rate schedule {η_t}, β_2 ≤ 1, ε > 0. 2: for t = 1, ..., T do 3: Receive stochastic gradient ∇f(x_t). 4: Let G_t be the buffer of the last r gradients, with older entries attenuated exponentially by powers of β_2. 5: Update x_{t+1} ← x_t − η_t · (εI + (G_t G_t^T)^{1/2})^{-1} ∇f(x_t). 6: end for. GGT provides the power of full-matrix adaptive regularization at a cost not much larger than SGD. This crucially exploits the fact that only a small window of historical gradients is used for preconditioning. The intuition for using a small window, as opposed to the entire history, is clear (and time-tested, by the ubiquity of Adam): the curvature of the loss surface changes, rendering previous gradient information obsolete. We expand on the benefits of forgetting gradients in Section 3.1. The fact that the preconditioning matrix is based on a small window of gradients implies that it has low rank. GGT exploits this fact by computing the inverse square root of the empirical covariance matrix indirectly, as outlined in FIG0. In effect, instead of inverting a full matrix in the dimension of parameters, GGT uses the special matrix structure to invert a matrix whose dimension is only the window size. The remainder of this section will discuss efficient implementation and some heuristics. GGT has provable guarantees even for non-convex optimization: it is guaranteed to converge to a first-order critical point. Its rate of convergence is never significantly slower than that of SGD, and in some favorable geometric conditions, can be significantly faster. These theoretical bounds are made precise in Section 4. The window parameter r should be roughly the number of copies of the model that fit in RAM; in our large-scale experiments, we use r = 200. A pessimistic but principled choice is r = Θ(1/(1 − β_2)), which truncates on the time scale of the exponential attenuation. Our key observation, highlighted in FIG0, is that the inversion of the large low-rank matrix GG^T can be performed by diagonalizing the small matrix G^T G, along with some extremely GPU-friendly matrix-vector operations. The basic intuition is contained in FIG0, but it remains to include the εI term. We derive the full update here. Let DISPLAYFORM0, and let Σ_r ∈ R^{r×r} be its top left block. Let U =: [U_r U_{d−r}], so that the columns of U_r ∈ R^{d×r} are an orthonormal basis for the column space of G, and DISPLAYFORM1. The first term is none other than an SGD update step. The rest can be computed by taking the DISPLAYFORM2. We prefer this to taking the direct SVD of G, which is > 10 times slower on GPU. Using a cyclic buffer to store and update G_t, the algorithm takes O(dr^2 + r^3) (sequential) time per iteration, and O(dr) memory in total. Iterating over the model parameters to update G_t incurs the same overhead cost as usual adaptive optimizers. The r × d matrix multiplication and r × r SVD operations benefit from decades of extensive hardware-level optimizations.
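The following numpy sketch illustrates the core trick: applying (εI + (GG^T)^{1/2})^{-1} to a gradient using only an r×r eigendecomposition and d×r matrix products, never forming a d×d matrix. It is our own reconstruction of the derivation above, not the authors' code, and the variable names are ours:

```python
import numpy as np

def ggt_precondition(G, g, eps=1e-4):
    """Return (eps*I + (G @ G.T)**0.5)^{-1} @ g for a tall gradient buffer G.

    G: (d, r) matrix of recent (attenuated) gradients, with d >> r.
    g: (d,) current gradient.
    """
    # Eigendecompose the small Gram matrix: G.T @ G = V diag(s^2) V.T
    sq, V = np.linalg.eigh(G.T @ G)
    s = np.sqrt(np.clip(sq, 0.0, None))            # singular values of G
    keep = s > 1e-10 * max(s.max(), 1e-30)         # drop numerically null directions
    s, V = s[keep], V[:, keep]
    U = (G @ V) / s                                # (d, k) left singular vectors of G
    Ug = U.T @ g
    in_span = U @ (Ug / (eps + s))                 # directions seen in the window
    out_of_span = (g - U @ Ug) / eps               # SGD-like step off the span
    return in_span + out_of_span

# toy usage
d, r = 100_000, 20
G = np.random.randn(d, r)
g = np.random.randn(d)
step = ggt_precondition(G, g)                      # O(d r^2 + r^3) work, O(dr) memory
```

For directions spanned by the recent gradients the step is shrunk by 1/(ε + σ_i), while orthogonal directions receive a plain 1/ε-scaled (SGD-like) step, which is exactly the decoupling that the "interpolation with SGD" heuristic below exploits.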
In the experiments in Section 3, we observed a ∼1.3x (CNN) and ∼2x (RNN) running-time overhead over SGD; we note that this ratio could be even smaller in reinforcement learning (where the environment causes the time bottleneck), or universally with a more optimized implementation. Below, we list some practical suggestions for applying GGT to training large-scale models. Momentum. In order to bring GGT closer to a drop-in replacement for Adam, we can add momentum to the gradient steps: let v_t ← β_1 v_{t−1} + ∇f(x_t), and apply the preconditioner to v_t to compute the update step. We use momentum in all large-scale experiments, with the standard β_1 = 0.9. We also get a small performance boost by using v_t instead of the gradients to update G_t. On the other hand, as long as r ≪ T, it makes little difference to choose β_2 = 1, letting the window (rather than exponential attenuation) forget stale gradient information. Interpolation with SGD. We note the possibility of decoupling the scalars ε and 1/ε which appear in the efficient update step. Appealingly, this allows the user to tune GGT's behavior to be arbitrarily close to that of SGD. Numerical concerns. For greater numerical stability, it is possible to add a small multiple of the identity matrix (we suggest 10^{-6}) to G^T G before computing its eigendecomposition, without noticeable differences in training. In this section, we present an empirical study of GGT. We begin with some simple experiments, showing that adaptive methods help in the presence of ill-conditioned optimization problems, as well as the value of limited gradient memory. Next, we evaluate the performance of GGT on larger-scale deep learning tasks (and provide some additional such experiments in Appendix B). Finally, we present some interesting empirical insights on the training dynamics in deep learning models. Our visualizations of gradient spectra suggest that adaptive optimizers are indeed correcting for changing anisotropic curvature in the loss landscape. The original theorems on the behavior of adaptive first-order methods are established from the perspective of online convex optimization BID13. The dynamics are less understood on realistic loss landscapes in stochastic optimization. For this reason, we begin our experimental section with some simple empirical comparisons between full- and diagonal-matrix adaptive optimizers and SGD. In each synthetic experiment, we generated an ill-conditioned landscape, and compared SGD with adaptive optimizers, excluding the typical accompanying heuristics (i.e. no momentum, regularization, or learning rate schedule). We tested diagonal-matrix preconditioners with and without exponential gradient attenuation (like Adam and AdaGrad, respectively), and their full-matrix analogues. The experiments were robust with respect to the choice of ε (we used 10^{-4}) and batch size. In the first synthetic experiment (left), we exhibit an instance of logistic regression in dimension 10, with 10^3 samples generated from an extremely anisotropic (σ²_max/σ²_min ≈ 10^4) Gaussian distribution, and binary labels determined by a random hyperplane. SGD converges the slowest, and diagonal AdaGrad consistently accelerates optimization. Finally, full-matrix preconditioning (using cubic-time matrix inversion) converges the fastest. In this setting, adding a window improved convergence, but not drastically; we elaborate below. Next, we show an optimization problem (right) which accentuates the utility of exponentially decaying gradient memory.
We consider the problem of minimizing the logarithmic barrier function of a randomly generated anisotropic polytope, otherwise known as finding its analytic center: this replaces the logistic loss terms with f_i(w) = −log(w^T x_i + c_i), with x_i generated the same way as above, and c_i generated uniformly at random. We observed the same ranking of convergence rates as in the first experiment, but the improvement afforded by the window was much clearer. The primary result of our synthetic experiments is to demonstrate some small-scale settings in which adaptive regularization ameliorates anisotropy in the optimization landscape. A subtler point is that the windowed variants can help with changing curvature, even for convex losses. Note that the curvature of the former landscape is constant (in that its Hessian matrix at different locations w only changes by a scalar factor). The latter setting, in contrast, features a changing curvature (its Hessians do not commute in general), necessitating "forgetfulness" in adaptive curvature estimation. In Section 3.4, we will return to these proof-of-concept optimization instances, connecting them to an empirical study of curvature in more realistic landscapes. We investigated the training dynamics of GGT on a typical deep architecture for computer vision. For this, we used a 26-layer 3-branch residual network with Shake-Shake regularization, recently proposed in BID16. Aside from its ability to reach state-of-the-art classification accuracy, this architecture also features a relatively low parameter count (∼3M), enabling the use of a large window parameter (r = 200). In each experiment, we kept the cosine learning rate annealing schedule proposed in the paper, originally from BID29; performance degraded consistently and significantly with a fixed learning rate. For both Adam and GGT, we chose the commonly used parameters β_1 = 0.9, β_2 = 0.999, ε = 10^{-8}; for SGD, we used momentum with parameter 0.9. With correctly tuned RMSprop and Adadelta, with the same window parameters, training curves were virtually identical to those for Adam. We used the standard data augmentation techniques of 4-pixel padding + random cropping and horizontal flipping. Our results are shown in FIG2. In terms of training loss, GGT consistently dominated existing optimizers. We corroborate a number of observations from previous empirical studies of the generalization of optimizers. Most prominently, we found that SGD generalized slightly better than all other methods, including ours, towards the end of training BID48. The gap (< 0.2%) is less dramatic than that seen in for two reasons: we only show curves with a tuned and annealed learning rate; also, we use an architecture with powerful explicit regularization techniques which have gained attention since their publication. Our preliminary observation is that GGT shrinks this gap slightly (corroborated by another experiment in Appendix B), and we expect that there is vastly more empirical work to be done concerning architectures synergistically tuned to existing optimizers. We also verify the long-held empirical observation that the learning rate decay of AdaGrad is too aggressive (e.g. in ), resulting in convergence to a poor solution. Finally, as noted in BID48, we find that using a sufficiently low learning rate for any optimizer can result in a better training loss curve, but not without significantly degrading generalization (> 3% worse). Next, we move to recurrent architectures for language modeling.
We train a 3-layer LSTM with ∼5M parameters for character-level modeling of the Penn Treebank dataset BID31. This is the setting in which we observe the most striking improvement over baselines. The particularities of this optimization task, and why it might be especially amenable to full-matrix regularization, remain a fruitful research direction BID38. FIG2 (bottom) shows training and validation perplexities for the first 50 epochs; no optimizer makes significant progress afterwards. The state of the art for character-level language modeling is less thoroughly documented than its word-level counterpart, though we note that our end-to-end result (validation perplexity 2.42 after 500 epochs) is competitive with those reported for recurrent models, like by BID24. In contrast, Adam, AdaGrad, and SGD reach 2.51, 2.65, and 2.76, respectively. Note that Adam is the de facto standard optimizer for language modeling BID35. Even with iterations taking twice the time, we outperform all baselines in wall-clock time throughout training. We also tried using GGT as a drop-in replacement for Adam in the state-of-the-art word-level language modeling code accompanying BID36. Although we were competitive with Adam, we only observed an improvement in the first ∼20 epochs. We hypothesize that the advantage of full-matrix regularization in this setting is more marginal, as the gradients in the embedding layers are naturally sparse in the vocabulary ("one-hot") basis. On a similar note, we found that Adam outperformed GGT on attention-based architectures for NLP; refer to Appendix B for an experiment and discussion. In this section, we unify the insights gleaned from the synthetic experiments and deep learning benchmarks. Along the way, we provide some interesting anecdotal observations on the evolution of the preconditioner matrices' singular values. We plot the density of the spectrum of the low-rank preconditioner G_t G_t^T as training progresses. Since the fast implementation of GGT takes an eigendecomposition of G_t^T G_t, we can read off the distribution of eigenvalues during training at no additional computational cost. FIG3 visualizes the results of this experiment for the CNN and RNN training settings from the previous two sections. In each case, we observe that G_t G_t^T has a condition number of ∼10^3, noting that this can be visualized as the vertical range in the logarithmic plot. This visualization affords a new way to see how CNN and RNN landscapes are fundamentally different: their gradient spectra evolve in very distinct ways over the course of training. Interestingly, the condition number of the CNN landscape surges near the end, which may be related to the low-rank structure of well-trained nets noted by BID5, who derive rank-dependent generalization bounds for neural networks. On recurrent models, the rapidly evolving spectral structure at the early stage of training indicates a possibly more complex landscape. Intriguingly, the enormous condition number (∼10^6) correlates with the massive lead of GGT over the others, confirming our intuition that full-matrix preconditioning ameliorates anisotropy. To our knowledge, this is the first empirical study of this kind, using the covariance matrix of recent gradients as a surrogate to examining the changing curvature of the loss landscape. In the spirit of recent empirical lenses of this flavor BID41 BID26, we leave this as a way to visualize deep learning dynamics, possibly of independent exploratory interest.
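Since the r×r eigendecomposition is computed anyway, logging the spectrum costs nothing extra; a small sketch (ours, not the authors') of the quantity plotted in these figures:

```python
import numpy as np

def gradient_spectrum(G):
    """Nonzero singular values of the gradient buffer G (d x r), i.e. the square
    roots of the eigenvalues of G.T @ G, sorted in decreasing order."""
    eigvals = np.linalg.eigvalsh(G.T @ G)
    sigma = np.sqrt(np.clip(eigvals, 0.0, None))
    return np.sort(sigma[sigma > 0])[::-1]

def window_condition_number(G):
    s = gradient_spectrum(G)
    return s[0] / s[-1] if s.size > 1 else float("inf")

# Logging np.log10(gradient_spectrum(G_t)) every few hundred steps yields the
# spectral-density plots, and window_condition_number tracks the ~10^3 / ~10^6
# condition numbers discussed above.
```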
In this section we outline our analysis of GGT, for which we show convergence to an approximate first-order critical point, in some settings faster than SGD. To obtain the strongest theory, we analyze GGT with a "hard window" instead of exponentially decaying gradient memory, explained in Section A.2. We work in the usual theoretical framework of stochastic optimization of a differentiable non-convex function f(·), equipped with an unbiased variance-bounded stochastic gradient oracle ∇f(·). The objective, as is standard in the literature (see, e.g. BID17; BID4), is to find an ε-approximate stationary point x; that is, ‖∇f(x)‖ ≤ ε. We quantify the improvement of adaptive regularization by its advantage over the usual worst-case bound of SGD. To this end, we define the adaptive ratio µ of an algorithm A as DISPLAYFORM0, where x_A is the output of A, and x* is a comparator. For convex optimization problems x* is naturally the global minimum. For non-convex optimization it is a subtler choice, which we detail in Appendix A. This ratio for the AdaGrad algorithm was shown in BID13 to be always bounded by a quantity independent of T, and potentially much smaller. Specifically, it was shown to be inversely proportional to the dimension in certain convex optimization problems, providing a theoretical justification for the speedup of adaptive optimizers. In Section A.4, we show a new, simple, and natural setting illustrating adaptive speedup, even for a strongly convex function f. We informally state the main theorem below. We defer the full bound without suppressed smoothness constants, as well as all technical proofs, to Appendix A. Theorem 4.1. Let f: R^d → R be a bounded, Lipschitz, and smooth function with stochastic gradient oracle ∇f(·), whose variance is at most σ^2. In expectation, Algorithm 3 outputs an ε-approximate stationary point after DISPLAYFORM0 stochastic gradient calls. This theorem matches and potentially improves the known analysis for stochastic gradient descent with the introduction of the data-dependent adaptivity constant µ into the leading-order term governing the rate of convergence. Since BID13 bounded µ by a quantity independent of T, our theorem matches the classic O(ε^{-4}) rate of convergence. This work investigates full-matrix adaptive regularization: our main contribution is to make this technique viable for large-scale optimization, by a method for efficient multiplication by the inverse square root of a full second-moment matrix over a short window of gradients. This leads to a new algorithm, GGT, a truly scalable optimization algorithm with full-matrix adaptive preconditioning. Through synthetic experiments, we have shown that GGT accelerates optimization in ill-conditioned loss landscapes; this is supported by accompanying adaptive convergence guarantees. Preliminary experiments show accelerated convergence on standard deep learning benchmarks, with very different training dynamics from existing diagonal adaptive methods. We accompany our algorithm and experiments with the first theoretical characterization of the benefits of adaptive regularization in a non-convex setting. We hope that GGT will be the first of a new class of algorithms for the modern large-scale optimization toolbox, and to foster new discussion towards an ever-elusive understanding of loss landscapes in deep learning. In this section, we give the details on the theoretical treatment of GGT outlined in Section 4. The overall goal is to develop a theory for adaptive regularization in non-convex stochastic optimization.
After formalizing the setting, we will define a version of GGT that uses a hard gradient memory window. This will allow us to transfer any insight on the advantage of adaptivity in the convex case to the non-convex case, giving rise to the main theorem. We will conclude this section with an example illustrating the advantage of adaptive optimizers in the presence of sparse gradients. A.1 SETTING: STOCHASTIC NON-CONVEX OPTIMIZATION. Theorem A.2 will provide a bound on the number of stochastic gradient calls required by GGT to achieve a first-order critical point. In particular, the theorem shows that GGT can converge to an approximate first-order critical point faster than SGD, with convergence rate controlled by the adaptive ratio µ, defined earlier. We consider the standard setting of stochastic optimization of a differentiable non-convex function f(·), equipped with a bounded-variance stochastic gradient oracle defined as follows. Definition A.1 (stochastic gradient oracle). Given a function f: D → R we call an oracle O_f a σ-bounded stochastic gradient oracle if for any x, O_f returns a random vector ∇f(x) such that DISPLAYFORM0. The objective, as is standard in non-convex optimization, is to find a first-order critical point, i.e. a point x for which ‖∇f(x)‖ ≤ ε. We will also assume that f has a Lipschitz gradient; i.e. DISPLAYFORM1. Our algorithm makes a reduction to the case of stochastic convex optimization. The setting formally is that, given a smooth convex function and a σ-bounded stochastic gradient oracle, the algorithm's aim is to minimize the convex function f. Given any algorithm A we can now define the adaptive ratio of the algorithm, referred to as µ, as DISPLAYFORM2, where x_A is the output of the algorithm A and x* ∈ argmin_x f(x), with a total of at most T calls to the stochastic gradient oracle. µ captures the advantage in convergence rate obtained by the algorithm as compared to the error obtained by vanilla SGD, noting that the denominator is a bound on the error obtained by SGD in the same setting. A popular algorithm for stochastic (and in general online) convex optimization is AdaGrad BID13. Due to adaptive regularization, AdaGrad can often be advantageous over SGD. We quantify this advantage by the notion of µ defined above. The bounds of BID13 imply that µ can be as small as DISPLAYFORM3, depending on the geometry of the optimization problem. An example of this was provided by BID13 for both the diagonal and the full version of AdaGrad. At the end of this section, we provide a different example which shows the same phenomenon even in the case of strongly convex functions. In the rest of this section we describe Algorithm 3, which uses AdaGrad (Algorithm 2) as a subroutine during each window. In this regard, while stating the bounds for our algorithms, we use µ as an upper bound on the advantage of AdaGrad in each iteration. As mentioned in Section 4, our analysis uses a slightly idealized version of GGT, which replaces the gradient memory mechanism (governed by w and β_2) with a hard window; i.e., the gradient buffer is reset every w steps. This simple modification enables us to develop a more informative theory, in which we benefit directly from the familiar theory of AdaGrad for convex optimization, while capturing the necessity of forgetting past gradient information in adaptive non-convex optimization.
First, for clarity, we restate the definition of the full-matrix AdaGrad algorithm, introduced by BID13, which accumulates the second-moment matrix of all past gradients: Algorithm 2 AdaGrad for convex optimization BID13. 1: Input: initializer x_1, window length w, stochastic gradient oracle ∇f(·), ε, η > 0. 2: for t = 1, ..., w do 3: Receive stochastic gradient ∇f(x_t). 4: Let G_t = [g_t g_{t−1} ... g_1], where g_t := ∇f(x_t). 5: Update x_{t+1} ← x_t − η · (εI + (G_t G_t^T)^{1/2})^{-1} g_t. 6: end for 7: Output: Average iterate DISPLAYFORM0. The final algorithm we analyze simply runs AdaGrad between restarts. Algorithm 3 GGT with a hard gradient window. 1: Input: initializer x_1, time horizon T, window length w, λ > 0. 2: for t = 1 to T do 3: Define f_t(x) := f(x) + λ‖x − x_t‖². 4: Update x_{t+1} to be the output of Algorithm 2 on f_t(x), starting at x_t, for w steps. 5: end for 6: Output: Best iterate x_{t*}, where t* := argmin_{t≤T+1} ‖∇f(x_t)‖. The remaining discrepancies between Algorithm 3 and Algorithm 1 from the main paper are standard. We provide some references below. • Absence of first-moment estimation. Although it is customary to use nonzero β_1 (otherwise known as momentum) when applying Adam in practice, it is orthogonal to the effect of adaptive regularization in all established theory. In fact, the convergence rates given by (and fixed by BID42) contain only factors of 1/(1 − β_1), and are thus strongest when β_1 = 0. • Model averaging. Theoretical guarantees in online and stochastic convex optimization are most naturally stated on the average iterate; see BID40 BID13. Thus, we adopt the convention that Algorithm 2 returns the average iterate. We note that model averaging is a common regularization technique in practical non-convex settings, though not the default choice for adaptive optimizers in practice. • ℓ2 regularization. The addition of the λ‖x − x_t‖² term in Algorithm 3 is an artifact we introduce to obtain a tight analysis for hard-window GGT. It ensures that iterates in each window do not move too far, and allows us to analyze each window as a fixed convex program, so that we can use the convex theory of AdaGrad directly. The soft-window analogue would simply be to decrease the learning rate. Interestingly, a similar technique directly appears in the algorithm proposed by BID3. Finally, we note that from a σ-bounded stochastic gradient oracle for f, it is trivial to construct one for f_t, by adding the gradient 2λ(x − x_t) of the regularization term (deterministically). Theorem A.2. Consider a non-convex function f, such that for all x, ‖∇²f(x)‖_2 ≤ L, and a point DISPLAYFORM0. Further, suppose we have access to a σ-bounded DISPLAYFORM1. Then the point x returned by Algorithm 3 is such that E‖∇f(x)‖ ≤ ε, where µ = max_{t∈[T]} µ_t and µ_t is the adaptive ratio when run on f_t (as defined in FORMULA9). Further, note that choosing λ = 3L/2, the total number of stochastic gradient calls to the oracle O_f made by the algorithm is bounded by T · w. FIG2: Top: CIFAR-10 classification with a 3-branch ResNet. Bottom: PTB character-level language modeling with a 3-layer LSTM. We present some additional large-scale empirical studies in FIG5. To demonstrate a vision task with a harder optimization landscape, we use GGT to train a 19-layer "vanilla" convolutional network (VGGNet, BID44), without residual connections or batch normalization, on the same CIFAR-10 classification task. Here, we recover the same insights as found by BID48, in which diagonal-matrix adaptive methods can fail to train a network dramatically.
Here, unlike diagonal-matrix adaptive optimizers, GGT stays on par with SGD throughout training, with a ∼ 1% gap remaining in generalization at the end. We use a standard fixed halving learning rate schedule; it is clear here that in the initial epochs after decaying the learning rate, GGT trains the most rapidly. We leave a careful investigation of leveraging this phenomenon, and tuning GGT's learning rate schedule, to future work. A recent significant advancement on many NLP tasks, including language modeling, is the introduction of attention-based models. We investigate the behavior of GGT on a Transformer network BID46, on the same Penn Treebank character-level language modeling task. Here, after an initial lead, GGT is outperformed by Adam in training and validation loss. The value of using gradient correlations to assist in the training of attention models seems to be limited.
fast, truly scalable full-matrix AdaGrad/Adam, with theory for adaptive stochastic non-convex optimization
Dialogue systems require a great deal of different but complementary expertise to assist, inform, and entertain humans. For example, different domains (e.g., restaurant reservation, train ticket booking) of goal-oriented dialogue systems can be viewed as different skills, and so can the ordinary chatting abilities of chit-chat dialogue systems. In this paper, we propose to learn a dialogue system that independently parameterizes different dialogue skills, and learns to select and combine each of them through Attention over Parameters (AoP). The experimental results show that this approach achieves competitive performance on a combined dataset of MultiWOZ, In-Car Assistant, and Persona-Chat. Finally, we demonstrate that each dialogue skill is effectively learned and can be combined with other skills to produce selective responses. Unlike humans, who can do both, goal-oriented dialogues and chit-chat conversations (a;) are often learned with separate models. A more desirable approach for the users would be to have a single chat interface that can handle both casual talk and tasks such as reservation or scheduling. This can be formulated as a problem of learning different conversational skills across multiple domains. A skill can be either querying a database, generating daily conversational utterances, or interacting with users in a particular task-domain (e.g. booking a restaurant). One challenge of having multiple skills is that existing datasets either focus only on chit-chat or on goal-oriented dialogues. This is due to the fact that traditional goal-oriented systems are modularized (; ; ; ;); thus, they cannot be jointly trained with an end-to-end architecture as in chit-chat. However, recently proposed end-to-end trainable models (; ;) and datasets (; ) allow us to combine goal-oriented (; ) and chit-chat into a single benchmark dataset with multiple conversational skills as shown in Table 1. A straightforward solution would be to have a single model for all the conversational skills, which has been shown to be effective to a certain extent by and. Putting aside the performance in the tasks, such a fixed shared-parameter framework, without any task-specific designs, would lose controllability and interpretability in the response generation. In this paper, instead, we propose to model multiple conversational skills using the Mixture of Experts (MoE) paradigm, i.e., a model that learns and combines independent specialized experts using a gating function. For instance, each expert could specialize in different dialogue domains (e.g., Hotel, Train, ChitChat etc.) and skills (e.g., generate SQL query). A popular implementation of MoE uses a set of linear transformations (i.e., experts) in between two LSTM layers. However, several problems arise with this implementation: 1) the model is computationally expensive as it has to decode with every expert and make the combination at the representation level; 2) no prior knowledge is injected in the expert selection (e.g., domains); 3) the Seq2Seq model has limited ability in extracting information from a Knowledge Base (KB) (i.e., generated by the SQL query), as required in end-to-end task-oriented dialogues. Table 1: An example from the dataset which includes both chit-chat and task-oriented conversations. The model has to predict all the Sys turns, which include SQL queries and responses generated from the Memory content, which is dynamically updated with the query results. The skills are the prior knowledge needed for the response, where Persona refers to chit-chat.
Spk. Conversation Skills — Usr: Can you help me find a cheap 2 star hotel? Sys: SELECT * FROM hotel WHERE pricerange='cheap' AND stars=2 AND type='hotel' (SQL+HOTEL) Mem: (Result table from the query). The latter can be solved by using more advanced multi-hop models like the Transformer, but the remaining two need to be addressed. Hence, in this paper we: • propose a novel Transformer-based architecture called Attention over Parameters (AoP). This model parameterizes the conversational skills of end-to-end dialogue systems with independent decoder parameters (experts), and learns how to dynamically select and combine the appropriate decoder parameter sets by leveraging prior knowledge from the data such as domains and skill types; • prove that AoP is algorithmically more efficient (Appendix A1) compared to forwarding through all the Transformer decoders and then mixing their output representations, as is normally done in MoE. Figure 1 illustrates the high-level intuition of the difference; • empirically show the effectiveness of using specialized parameters in a combined dataset of MultiWOZ, In-Car Assistant, and Persona-Chat, which, to the best of our knowledge, is the first evaluation of this kind, i.e., end-to-end large-scale multi-domain/multi-skill dialogue. Moreover, we show that our model is highly interpretable and is able to combine different learned skills to produce compositional responses. Dialogue. Task-oriented dialogue models can be categorized in two types: module-based (; ; ; ;) and end-to-end. In this paper, we focus on the latter, which are systems that train a single model directly on text transcripts of dialogues. These tasks are tackled by selecting a set of predefined utterances (; ; ;) or by generating a sequence of tokens (; b;). Especially in the latter, copy-augmented models (; ;) are very effective since extracting entities from a knowledge base is fundamental. On the other hand, end-to-end open-domain chit-chat models have been widely studied (a; ;). Several works improved on the initially reported baselines with various methodologies (; ; ; ; ; ;). Finally, was the first attempt at having an end-to-end system for both task-oriented models and chit-chat. However, the dataset used for the evaluation was small, evaluated only in a single domain, and the chit-chat ability was added manually through rules. The idea of having specialized parameters, or so-called experts, has been a widely studied topic in the last two decades. For instance, different architectures and methodologies have been used such as SVM, Gaussian Processes (; ;), Dirichlet Processes, Hierarchical Experts, an Infinite Number of Experts, and sequential expert addition. More recently, the Mixture of Experts model was proposed, which added a large number of experts between two LSTMs. To the best of our knowledge, none of these previous works applied the output of the gating function to the parameters themselves. On the other hand, there are Conditional Computation models which learn to dynamically select their computation graph. Several methods have been used, such as reinforcement learning, halting functions (; ;), pruning, and routing/controller functions. However, this line of work focuses more on optimizing the inference performance of the model than on specializing parts of it for computing a certain task. Multi-task Learning. Even though our model processes only input and output sequences of text, it actually jointly learns multiple tasks (e.g.
SQL and BOOK query, memory retrieval, and response generation), thus it is also related to multi-task learning. Interested readers may refer to for a general overview on the topic. In Natural Language Processing, multi-task learning has been applied in a wide range of applications such as parsing (; ;), machine translation in multiple languages, and joint image captioning and machine translation. More interestingly, DecaNLP has a large set of tasks that are cast to question answering (QA), and learned by a single model. In this work, we focus more on conversational data, but in future works, we plan to include these QA tasks. We use the standard encoder-decoder architecture and avoid any task-specific designs (; ), as we aim to build a generic conversation model for both chit-chat and task-oriented dialogues. More specifically, we use a Transformer for both encoder and decoder. Let us define the sequence of tokens in the dialogue history as D = {d_1, ..., d_m} and the dynamic memory content as a sequence of tokens M = {m_1, ..., m_z}. The latter can be the result of a SQL query execution (e.g., a table) or plain text (e.g., a persona description), depending on the task. The dialogue history D and the memory M are concatenated to obtain the final input, denoted by X = [D; M]. We then denote Y = {y_1, ..., y_k} as the sequence of tokens that the model is expected to produce. Without loss of generality, Y can be both plain text and SQL-like queries. Hence, the model has to learn when to issue database queries and when to generate human-like responses. Finally, we define a binary skill vector V = {v_1, ..., v_r} that specifies the type of skills required to generate Y. This can be considered as a prior vector for learning to select the correct expert during training 1. For example, in Table 1 the first response is of type SQL in the Hotel domain, thus the skill vector V will have v_SQL = 1 and v_Hotel = 1, while all the other skills/domains are set to zero 2. More importantly, we may set the vector V to have multiple ones to enforce the model to compose skills to achieve a semantic compositionality of different experts. To map the input sequence to the output sequence, we use a standard Transformer and denote the encoder and decoder as TRS_enc and TRS_dec, respectively. The input of a Transformer is the embedded representation of the input words; thus, we define a word embedding matrix E ∈ R^{d×|V|} where d is the embedding size and |V| is the cardinality of the vocabulary. The input X, with its positional embedding (see Appendix A2 for more information), is encoded as H = TRS_enc(E(X)), where H ∈ R^{d_model×n}. Then the decoder receives the target sequence shifted by one, Y_{:k−1} = {<SOS>, y_1, ..., y_{k−1}}, as the input. Using teacher-forcing, the model is trained to produce the correct sequence Y. The output of the decoder is produced as O = TRS_dec(E(Y_{:k−1}), H), where O ∈ R^{d_model×k}. Finally, a distribution over the vocabulary is generated for each token by an affine transformation W ∈ R^{d_model×|V|} followed by a Softmax function. In addition, P(Y|X) is mixed with the encoder-decoder attention distribution to enable copying tokens from the input sequence as in. The model is then trained to minimize a standard cross-entropy loss and, at inference time, to generate one token at a time in an auto-regressive manner. Hence, the training loss is defined as L_{P(Y|X)} = − Σ_k log P(y_k | y_{<k}, X). The main idea is to produce a single set of parameters for the decoder TRS_dec by the weighted sum of r independently parameterized decoders.
This process is similar to attention, where the memories are the parameters and the query is the encoded representation. Let us define Θ = [θ_1, ..., θ_r] as the list of parameters for r decoders, since a TRS_dec is represented by its parameters θ. Since each θ can be sized in the order of millions, we assign a corresponding key vector to each θ, similar to key-value memory networks. Thus, we use a key matrix K ∈ R^{d_model×r} and a Recurrent Neural Network (RNN), in this instance a GRU, to produce the query vector q by processing the encoder output H. The attention weights over the decoders' parameters are computed by scoring the query against the keys in K (Equation 6), where q ∈ R^{d_model} and α ∈ R^r is the attention vector in which each α_i is the score corresponding to θ_i. Hence, the new set of parameters is computed as θ* = Σ_{i=1}^{r} α_i θ_i. The combined set of parameters θ* is then used to initialize a new TRS_dec, and Equation 2 will be applied to the input based on this. Equation 6 is similar to the gating function proposed in, but the resulting scoring vector α is applied directly to the parameters instead of to the output representation of each decoder. [Footnote 1: the vector V is absent during testing. Footnote 2: with the assumption that each index in V is assigned a semantic skill (e.g. SQL at position i).] This indeed makes the computation algorithmically faster, since the forward computation is done only once and summing r elements of size |θ| is linear, compared to forwarding the input through r decoders. Interested readers may refer to Appendix A1 for the proof. Importantly, if we apply α to each of the output representations O_i generated by TRS^i_dec, we end up having a Transformer-based implementation of MoE 3. We call this model Attention over Representation (AoR). Moreover, an additional loss term is used to supervise the attention vector α by using the prior knowledge vector V. Since multiple decoder parameters can be selected at the same time, we use a binary cross-entropy to train each α_i. Thus a second loss is defined as L_V = − Σ_{i=1}^{r} [v_i log α_i + (1 − v_i) log(1 − α_i)]. The final loss is the summation of L_{P(Y|X)} and L_V. Finally, in AoP, but in general in the MoE framework, stacking multiple layers (e.g., Transformer) leads to models with a large number of parameters, since multiple experts are repeated across layers. An elegant workaround is the Universal Transformer, which loops over a unique layer and, as shown by, holds similar or better performance than a multi-layer Transformer. In our experiments, we report a version of AoP that uses this architecture, which therefore does not add any further parameters to the model. To evaluate the performance of our model for different conversational skills, we propose to combine three publicly available datasets: MultiWOZ, Stanford Multi-domain Dialogue, and Persona-Chat. MultiWOZ (MWOZ) is a human-to-human multi-domain goal-oriented dataset annotated with dialogue acts and states. In this dataset, there are seven domains (i.e., Taxi, Police, Restaurant, Hospital, Hotel, Attraction, Train) and two API interfaces: SQL and BOOK. The former is used to retrieve information about a certain domain and the latter is used to book restaurants, hotels, trains, and taxis. We refine this dataset to include SQL/BOOK queries and their outputs using the same annotation schema as. Hence, each response can either be a plain text conversation with the user or a SQL/BOOK query, and the memory is dynamically populated with the results from the queries, as the generated response is based on such information.
This transformation allows us to train end-to-end models that learn how and when to produce SQL queries, to retrieve knowledge from a dynamic memory, and to produce plain text responses. A detailed explanation is reported in Appendix A4, together with some samples. Stanford Multi-domain Dialogue (SMD) is another human-to-human multi-domain goal-oriented dataset that is already designed for end-to-end training. There are three domains in this dataset (i.e., Point-of-Interest, Weather, Calendar). The difference between this dataset and MWOZ is that each dialogue is associated with a set of records relevant to the dialogues. The memory is fixed in this case so the model does not need to issue any API calls. However, retrieving the correct entities from the memory is more challenging as the model has to compare different alternatives among records. Persona-Chat is a multi-turn conversational dataset, in which two speakers are paired and different persona descriptions (4-5 sentences) are randomly assigned to each of them. For example, "I am an old man" and "I like to play football" are among the possible persona descriptions provided to the system. Training models using this dataset results in more persona-consistent and fluent conversations compared to other existing datasets. Currently, this dataset has become one of the standard benchmarks for chit-chat systems; thus, we include it in our evaluation. For all three datasets, we use the training/validation/test split provided by the authors and we keep all the real entities in the input instead of using their delexicalized version as in. This makes the task more challenging, but at the same time more interesting, since we force the model to produce real entities instead of generic and frequent placeholders. Goal-Oriented. For both MWOZ and SMD, we follow the evaluation done by existing works (;). We use the BLEU 4 score to measure response fluency and the Entity F1-Score to evaluate the ability of the model to generate relevant entities from the dynamic memory. Since MWOZ also includes SQL and BOOK queries, we compute the exact match accuracy (i.e., ACC_SQL and ACC_BOOK) and BLEU score (i.e., BLEU_SQL and BLEU_BOOK). Furthermore, we also report the F1-score for each domain in both MWOZ and SMD. Chit-Chat. We compare perplexity, BLEU score, F1-score, and Consistency score of the generated sentences with the human-generated prediction. The Consistency score is computed using a Natural Language Inference (NLI) model trained on dialogue NLI, a recently proposed corpus based on the Persona dataset. We fine-tune a pre-trained BERT model using the dialogue DNLI corpus and achieve a test set accuracy of 88.43%, which is similar to the best-reported model in. The consistency score C of a generated utterance u is computed by scoring u with the NLI model against each sentence p_j in the persona description. In, the authors showed that by re-ranking the beam search hypotheses using the DNLI score (i.e., the C score), they achieved a substantial improvement in dialogue consistency. Intuitively, having a higher consistency C score means having a more persona-consistent dialogue response. In our experiments, we compare Sequence-to-Sequence (Seq2Seq), Transformer (TRS), Mixture of Experts (MoE), and Attention over Representation (AoR) with our proposed Attention over Parameters (AoP). In all the models, we used the same copy mechanism as in. In AoR, instead of mixing the parameters as in Equation 7, we mix the output representations of each Transformer decoder (i.e., Equation 2).
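A minimal sketch contrasting the two mixing strategies with toy linear "decoders" (our own illustration; the real models mix full Transformer decoder layers, and the normalization used for α here is only illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, r, seq_len = 8, 13, 5               # r = 13 experts as in our experiments
experts = [rng.standard_normal((d_model, d_model)) for _ in range(r)]  # toy "decoders"
H = rng.standard_normal((seq_len, d_model))  # encoder output
K = rng.standard_normal((d_model, r))        # key matrix, one key per expert

q = H.mean(axis=0)                           # stand-in for the RNN-produced query
scores = q @ K
alpha = np.exp(scores) / np.exp(scores).sum()   # illustrative normalization only

# AoR: run every expert decoder, then mix the r output representations.
out_aor = sum(a * (H @ W) for a, W in zip(alpha, experts))

# AoP: mix the r parameter sets once (Equation 7), then run a single decoder.
W_star = sum(a * W for a, W in zip(alpha, experts))
out_aop = H @ W_star

print(np.allclose(out_aor, out_aop))         # True for linear experts
```

For linear experts the two outputs coincide, which makes the cost argument of Appendix A1 transparent: AoP needs a single forward pass plus a parameter sum, while AoR forwards the input through all r decoders. With full nonlinear Transformer decoders the outputs differ, but the single-forward-pass advantage of AoP remains.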
For all AoP, AoR, and MoE, r = 13 is the number of decoders (experts): 2 skills for SQL and BOOK, 10 different domains for MWOZ+SMD, and 1 for Persona-Chat. Furthermore, we also include the following experiments: AoP that uses the gold attention vector V, which we refer to as AoP w/ Oracle (or AoP + O); AoP trained by removing L_V from the optimization (AoP w/o L_V); and, as aforementioned, the Universal Transformer for both AoP (AoP + U) and the standard Transformer (TRS + U) (i.e., 6 hops). All detailed model descriptions and the full set of hyper-parameters used in the experiments are reported in Appendix A5. Table 2 and Table 3 show the evaluation on the MWOZ+SMD and Persona-Chat datasets, respectively. From Table 2, we can identify the following patterns. 1) AoP and AoR perform consistently better than the other baselines, which shows the effectiveness of combining parameters by using the correct prior V; 2) AoP performs consistently, but marginally, better than AoR, with the advantage of algorithmically faster inference; 3) Using Oracle (AoP+O) gives the highest performance in all the measures, which shows the performance upper bound for AoP. Hence, the performance gap when not using oracle attention is most likely due to the error in the attention α (i.e., 2% error rate). Moreover, Table 2 shows that by removing L_V (AoP w/o L_V) the model performance decreases, which confirms that a good inductive bias is important for learning how to select and combine different parameters (experts). Additionally, in Appendix A6, we report the per-domain F1-Score for SQL, BOOK and sentences, and Table 3 and Table 2 with the standard deviation among the three runs. Furthermore, from Table 3, we can notice that MoE has the lowest perplexity and F1-score, but AoP has the highest Consistency and BLEU score. Notice that the perplexity reported in is lower since the vocabulary used in their experiments is smaller. In general, the difference in performance among different models is marginal except for the Consistency score; thus, we can conclude that all the models can learn this skill reasonably well. Consistently with the previous results, when L_V is removed from the optimization, the models' performance decreases. Finally, in both Table 2 and Table 3, we report the results obtained by using the Universal Transformer, for both AoP and the Transformer. By adding the layer recursion, both models are able to consistently improve all the evaluated measures, in both Persona-Chat and the task-oriented tasks. Especially AoP, which achieves better performance than Oracle (i.e. single layer) in SQL accuracy, and a consistently better performance in the Persona-Chat evaluation. To demonstrate the effectiveness of our model in learning independent skills and composing them together, we manually trigger skills by modifying α and generate 14 different responses for the same input dialogue context. This experiment allows us to verify whether the model accurately captures the meaning of each skill and whether it can properly learn to compose the selected parameters (skills). Table 3 first shows the dialogue history along with the response of AoP on the top, and then different responses generated by modifying α (i.e., black cells correspond to 1 in the vector, while the white ones are 0). By analyzing Table 3, we can notice that: • The model learns the correct semantics of each skill.
For instance, the AoP response is of type SQL and Train, and by deactivating the SQL skill and activating other domain skills, including Train, we can see that the responses are grammatical and coherent with the selected skill semantics. For instance, by just selecting Train, the generated answer becomes "what time would you like to leave?", which is coherent with the dialogue context since such information has not yet been provided. Interestingly, when the Persona skill is selected, the generated response is conversational and also coherent with the dialogue, even though it is less fluent. • The model effectively learns how to compose multiple skills. For instance, when SQL or BOOK is triggered, the response uses the correct SQL syntax (e.g. "SELECT * FROM .." etc.). By also adding the corresponding domain skill, the model generates the correct query format and attributes relative to the domain type (e.g. with SQL and Restaurant, the model queries with the relevant attribute food for restaurants). In this paper, we propose a novel way to train a single end-to-end dialogue model with multiple composable and interpretable skills. Unlike previous work, which mostly focused on representation-level mixing, our proposed approach, Attention over Parameters, learns how to softly combine independent sets of specialized parameters (e.g., for making SQL queries, conversing with a consistent persona, etc.) into a single set of parameters. By doing so, we not only achieve compositionality and interpretability but also gain algorithmically faster inference speed. To train and evaluate our model, we organize multi-domain task-oriented datasets into end-to-end trainable formats and combine them with a conversational dataset (i.e. Persona-Chat). Our model learns to consider each task and domain as a separate skill that can be composed with others, or used independently, and we verify the effectiveness of the interpretability and compositionality with competitive experimental results and thorough analysis. A.1 COMPUTATIONAL COST OF AOP. Corollary A.0.1. The computational cost of Attention over Parameters (AoP) is always lower than that of Mixture of Experts (MoE) as long as the processed sequence is longer than 1. Proof. Let f_θ: R^d → R^n be a generic function parametrized by θ. Without loss of generality, we define θ as an affine transformation W ∈ R^{d×n}. Let X ∈ R^{t×d} be a generic input sequence of length t and dimension d. Let the set F = [f_{θ_1}, ..., f_{θ_r}] be the set of r experts. Hence, the operation done by MoE is y_MoE = Σ_{i=1}^{r} α_i f_{θ_i}(X). Thus the computational cost in terms of operations is O(rtdn + rtn), since the cost of f_{θ_i}(X) is O(tdn) and it is repeated r times, and the cost of summing the representations is O(rtn). On the other hand, the operation done by AoP is y_AoP = f_{θ*}(X), where θ* = Σ_{i=1}^{r} α_i θ_i; in this case the computational cost in terms of operations is O((r + t)dn), since the cost of summing the parameters is O(rdn) and the cost of f_{θ*} is O(tdn). Hence, it is easy to verify that if t > 1 then (r + t)dn < rtdn + rtn. Furthermore, the assumption of using a simple affine transformation W is actually an optimal case. Indeed, assuming that the cost of summing the parameters is equal to the number of operations is optimistic; for instance, already by using attention the number of operations increases but the number of parameters remains constant. Since the model input may include structured data (e.g. DB records), we further define another embedding matrix for encoding the types and the segments as P ∈ R^{d×|S|}, where S is the set of positional tokens and |S| is its cardinality.
P is used to inform the model of the token types, such as speaker information (e.g., Sys and Usr), the data type of the memory content (e.g., Miles, Traffic, etc.), and segment types like dialogue turn information and database record index. Figure 4 shows an example of the embedded representation of the input. Hence, we denote X_T and X_R as the type and segment tokens for each token in input X, respectively. Figure 5 shows the attention vector α over parameters for different generated sentences. In this figure, and by analyzing more examples, we can identify two patterns: • AoP learns to focus on the correct skills (i.e., SQL, BOOK) when API-calls are needed. From the first example in Figure 5, we can see that the activations in α are consistent with those in the correct attention vector V. There are also false positives, in which AoP puts too much weight on BOOK when the correct response is plain text that should request more information from the user (i.e., "i can help you with that. when would you like to leave the hotel?"). However, we can notice that this example is, in fact, "almost correct", as triggering a booking API call may also be considered a valid response. Meanwhile, the third example also fails to attend to the correct skill, but, in fact, generates a very fluent and relevant response. This is most likely because the answer is simple and generic. • The attention often focuses on multiple skills not directly relevant to the task. We observe this pattern especially when there are other skill-related entities mentioned in the context or the response. For example, in the second dialogue example in Figure 5, we can notice that AoP not only accurately focuses on the taxi domain, but also has non-negligible activations for restaurant and hotel. This is because the words "hotel" and "restaurant" are both mentioned in the dialogue context and the model has to produce two entities of the same type (i.e., finches bed and breakfast and ask). As mentioned in the main article, we convert MultiWOZ into an end-to-end trainable dataset. This requires adding SQL-syntax queries when the system response includes particular entities. To do so, we leverage two annotations: the state-tracker and the speech acts. The first is used to generate a well-formed query, including keys and attributes; the second is used to decide when to include the query. More details on the dialogue state-tracker slots and slot values, and the different speech acts, can be found in . A query is created from the slots, and their values, that have been updated in the latest turn. The SQL query uses the following syntax: Similarly, for the booking API BOOK, the syntax is the following: In both cases the slot values are kept as real entities. It is more challenging to decide when to issue such APIs. Speech acts are used for this decision, via the "INFORM-DOMAIN" and "RECOMMEND-DOMAIN" tags. Thus, any response that includes those speech tags will trigger an API call if and only if:
• there has been a change in the state-tracker from the previous turn;
• the produced query has never been issued before.
By manual checking, we found this strategy to be effective. However, as reported by , the speech act annotation includes some noise, which is also reflected in our dataset. The results from the SQL query can consist of more than 1K records with multiple attributes. Following , we use the following strategy: • If no speech act INFORM or RECOMMEND and the number of records is more than 5, we use a special token < T M > in the memory.
• If no speech act INFORM or RECOMMEND and the number of records are less or equal than 5, we put all the records in memory. • If any speech act INFORM or RECOMMEND, we filter the records to include based on the act value. Notice that this is a fair strategy, since all the ing record are correct possible answers and the annotators pick-up on of the record randomly . Notice that the answer of a booking call instead, is only one record containing the booking information (e.g. reference number, taxi plate etc.) or "Not Available" token in case the booking cannot made. We used a standard Transformer architecture with pre-trained Glove embedding . For the both Seq2Seq and MoE we use Adam optimizer with a learning rate of 1 × 10 −3, where instead for the Transformer we used a warm-up learning rate strategy as in. In both AoP and AoR we use an additional transformer layer on top the output of the model. Figure 6,7,8 shows the high level design MoE, AoR and AoP respectively. In all the model we used a batch size of 16, and we early stopped the model using the Validation set. All the experiments has been conducted using a single Nvidia 1080ti. We used a small grid-search for tuning each model. The selected hyper-parameters are reported in Table 4, and we run each experiment 3 times and report the mean and standard deviation of each . model consist of r feed-forward neural network (experts) which are embedded between two LSTM layers, a trainable gating network to select experts. Figure 7: Attention over Representation (AoR) consist of a transformer encoder which encode the source input and compute the attention over the skills. Then r transformer decoder layers computes r specialized representation and the output response is generated based on the weighted sum the representation. In the figure, we omitted the output layer. Figure 8: Attention over Parameters (AoP) consist of a transformer encoder which encode the source input and compute the attention over the skills. Then, r specialized transformer decoder layers and a dummy transformer decoder layer parameterized by the weighted sum of the r specialized transformer decoder layers parameters. In the figure, we omitted the output layer.
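To complement Corollary A.0.1, the following NumPy sketch (ours, not the paper's code release; the layer is a single bias-free affine map and the dimensions are illustrative) contrasts the MoE and AoP computations. For such a layer the two mixings give identical outputs, so only their cost differs.

import numpy as np

def moe_forward(X, expert_weights, alpha):
    # Mixture of Experts: every expert processes the sequence, then the r output
    # representations are mixed -> O(r*t*d*n) for the forward passes + O(r*t*n) for the sum.
    return sum(a * (X @ W) for a, W in zip(alpha, expert_weights))

def aop_forward(X, expert_weights, alpha):
    # Attention over Parameters: mix the r parameter sets once -> O(r*d*n),
    # then run a single forward pass -> O(t*d*n); total O((r + t)*d*n).
    W_star = sum(a * W for a, W in zip(alpha, expert_weights))
    return X @ W_star

t, d, n, r = 32, 64, 64, 13                      # sequence length, dims, number of skills
rng = np.random.default_rng(0)
X = rng.normal(size=(t, d))
weights = [rng.normal(size=(d, n)) for _ in range(r)]
alpha = rng.dirichlet(np.ones(r))                # attention over the skills

# For a purely affine layer the two mixings coincide, so only the operation count differs.
np.testing.assert_allclose(moe_forward(X, weights, alpha),
                           aop_forward(X, weights, alpha), atol=1e-8)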
BJepraEFPr
In this paper, we propose to learn a dialogue system that independently parameterizes different dialogue skills, and learns to select and combine each of them through Attention over Parameters (AoP).
Model distillation aims to distill the knowledge of a complex model into a simpler one. In this paper, we consider an alternative formulation called dataset distillation: we keep the model fixed and instead attempt to distill the knowledge from a large training dataset into a small one. The idea is to synthesize a small number of data points that do not need to come from the correct data distribution, but will, when given to the learning algorithm as training data, approximate the model trained on the original data. For example, we show that it is possible to compress 60,000 MNIST training images into just 10 synthetic distilled images (one per class) and achieve close to the original performance, given a fixed network initialization. We evaluate our method in various initialization settings. Experiments on multiple datasets, MNIST, CIFAR10, PASCAL-VOC, and CUB-200, demonstrate the ad-vantage of our approach compared to alternative methods. Finally, we include a real-world application of dataset distillation to the continual learning setting: we show that storing distilled images as episodic memory of previous tasks can alleviate forgetting more effectively than real images. proposed network distillation as a way to transfer the knowledge from an ensemble of many separately-trained networks into a single, typically compact network, performing a type of model compression. In this paper, we are considering a related but orthogonal task: rather than distilling the model, we propose to distill the dataset. Unlike network distillation, we keep the model fixed but encapsulate the knowledge of the entire training dataset, which typically contains thousands to millions of images, into a small number of synthetic training images. We show that we can go as low as one synthetic image per category, training the same model to reach surprisingly good performance on these synthetic images. For example, in Figure 1a, we compress 60, 000 training images of MNIST digit dataset into only 10 synthetic images (one per category), given a fixed network initialization. Training the standard LENET on these 10 images yields test-time MNIST recognition performance of 94%, compared to 99% for the original dataset. For networks with unknown random weights, 100 synthetic images train to 89%. We name our method Dataset Distillation and these images distilled images. But why is dataset distillation interesting? First, there is the purely scientific question of how much data is encoded in a given training set and how compressible it is? Second, we wish to know whether it is possible to "load up" a given network with an entire dataset-worth of knowledge by a handful of images. This is in contrast to traditional training that often requires tens of thousands of data samples. Finally, on the practical side, dataset distillation enables applications that require compressing data with its task. We demonstrate that under the continual learning setting, storing distilled images as memory of past task and data can alleviate catastrophic forgetting . A key question is whether it is even possible to compress a dataset into a small set of synthetic data samples. For example, is it possible to train an image classification model on synthetic images that are not on the manifold of natural images? Conventional wisdom would suggest that the answer is no, as the synthetic training data may not follow the same distribution of the real test data. Yet, in this work, we show that this is indeed possible. 
We present an optimization algorithm for synthesizing a small number of synthetic data samples that not only capture much of the original training data but are also tailored explicitly for fast model training with only a few data points. To achieve our goal, we first derive the network weights as a differentiable function of our synthetic training data. Given this connection, instead of optimizing the network weights for a particular training objective, we optimize the pixel values of our distilled images. However, this formulation requires access to the initial weights of the network. To relax this assumption, we develop a method for generating distilled images for randomly initialized networks. To further boost performance, we propose an iterative version, where the same distilled images are reused over multiple gradient descent steps so that the knowledge can be fully transferred into the model. Finally, we study a simple linear model, deriving a lower bound on the size of distilled data required to achieve the same performance as training on the full dataset.
Figure 1: We distill the knowledge of tens of thousands of images into a few synthetic training images called distilled images. On MNIST, 100 distilled images can train a standard LENET with a random initialization to 89% test accuracy, compared to 99% when fully trained. On CIFAR10, 100 distilled images can train a network with a random initialization to 41% test accuracy, compared to 80% when fully trained. In Section 3.6, we show that these distilled images can efficiently store knowledge of previous tasks for continual learning.
We demonstrate that a handful of distilled images can be used to train a model with a fixed initialization to achieve surprisingly high performance. For networks pre-trained on other tasks, our method can find distilled images for fast model fine-tuning. We test our method on several initialization settings: fixed initialization, random initialization, fixed pre-trained weights, and random pre-trained weights. Extensive experiments on four publicly available datasets, MNIST, CIFAR10, PASCAL-VOC, and CUB-200, show that our approach often outperforms existing methods. Finally, we demonstrate that for continual learning methods that store limited-size past data samples as episodic memory , storing our distilled data instead is much more effective. Our distilled images contain richer information about the past data and tasks, and we show experimental evidence on standard continual learning benchmarks. Our code, data, and models will be available upon publication. Knowledge distillation. The main inspiration for this paper is network distillation , a widely used technique in ensemble learning and model compression (; ;). While network distillation aims to distill the knowledge of multiple networks into a single model, our goal is to compress the knowledge of an entire dataset into a few synthetic data points. Our method is also related to the theoretical concept of teaching dimension, which specifies the minimal size of data needed to teach a target model to a learner . However, methods (; 2015) inspired by this concept require the existence of target models, which our method does not. Dataset pruning, core-set construction, and instance selection. Another way to distill knowledge is to summarize the entire dataset by a small subset, either by only using the "valuable" data for model training (; ;) or by only labeling the "valuable" data via active learning .
Similarly, core-set construction (; ; ;) and instance selection (Olvera-López et al., 2010) methods aim to select a subset of the entire training data, such that models trained on the subset will perform as well as the model trained on the full dataset. For example, solutions to many classical linear learning algorithms, e.g., Perceptron and SVMs , are weighted sums of subsets of training examples, which can be viewed as core-sets. However, algorithms constructing these subsets require many more training examples per category than we do, in part because their "valuable" images have to be real, whereas our distilled images are exempt from this constraint. Gradient-based hyperparameter optimization. Our work bears similarity with gradient-based hyperparameter optimization techniques, which compute the gradient of hyperparameter w.r.t. the final validation loss by reversing the entire training procedure (; ; ;). We also backpropagate errors through optimization steps. However, we use only training set data and focus more heavily on learning synthetic training data rather than tuning hyperparameters. To our knowledge, this direction has only been slightly touched on previously . We explore it in greater depth and demonstrate the idea of dataset distillation in various settings. More crucially, our distilled images work well across random initialization weights, not possible by prior work. Understanding datasets. Researchers have presented various approaches for understanding and visualizing learned models (; ; ; ;). Unlike these approaches, we are interested in understanding the intrinsic properties of the training data rather than a specific trained model. Analyzing training datasets has, in the past, been mainly focused on the investigation of bias in datasets . For example, proposed to quantify the "value" of dataset samples using cross-dataset generalization. Our method offers a different perspective for understanding datasets by distilling full datasets into a few synthetic samples. Given a model and a dataset, we aim to obtain a new, much-reduced synthetic dataset which performs almost as well as the original dataset. We first present our main optimization algorithm for training a network with a fixed initialization with one gradient descent (GD) step (Section 3.1). In Section 3.2, we derive the resolution to a more challenging case, where initial weights are random rather than fixed. In Section 3.3, we further study a linear network case to help readers understand both the properties and limitations of our method. We also discuss the distribution of initial weights with which our method can work well. In Section 3.4, we extend our approach to reuse the same distilled images over 2, 000 gradient descent steps and largely improve the performance. Finally, Section 3.5 discusses dataset distillation for different initialization distributions. Finally, in Section 3.6, we show that our distilled images can be used as effective episodic memory for continual learning tasks., we parameterize our neural network as θ and denote (x i, θ) as the loss function that represents the loss of this network on a data point x i. Our task is to find the minimizer of the empirical error over entire training data: where for notation simplicity we overload the (·) notation so that (x, θ) represents the average error of θ over the entire dataset. We make the mild assumption that is twice-differentiable, which holds true for the majority of modern machine learning models and tasks. 
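In symbols, the empirical objective described above can be written as follows (our notation, using the average-loss overload of ℓ):

\theta^{*} \;=\; \arg\min_{\theta}\; \frac{1}{N} \sum_{i=1}^{N} \ell(x_i, \theta)
\;\triangleq\; \arg\min_{\theta}\; \ell(\mathbf{x}, \theta).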
Standard training usually applies minibatch stochastic gradient descent or its variants. At each step t, a minibatch of training data x_t = {x_{t,j}}_{j=1}^{n} is sampled to update the current parameters as θ_{t+1} = θ_t − η ∇_{θ_t} ℓ(x_t, θ_t), where η is the learning rate. Such a training process often takes tens of thousands or even millions of update steps to converge. Instead, we learn a tiny set of synthetic distilled training data x̃ with M ≪ N and a corresponding learning rate η̃ so that a single GD step of the form θ_1 = θ_0 − η̃ ∇_{θ_0} ℓ(x̃, θ_0) using these learned synthetic data x̃ can greatly boost the performance on the real test set. Given an initial θ_0, we obtain these synthetic data x̃ and learning rate η̃ by minimizing the objective L(x̃, η̃; θ_0) = ℓ(x, θ_0 − η̃ ∇_{θ_0} ℓ(x̃, θ_0)), where we derive the new weights θ_1 as a function of distilled data x̃ and learning rate η̃ using Equation 2 and then evaluate the new weights over all the real training data x. The loss L(x̃, η̃; θ_0) is differentiable w.r.t. x̃ and η̃, and can thus be optimized using standard gradient-based methods. In many classification tasks, the data x may contain discrete parts, e.g., class labels in data-label pairs. For such cases, we fix the discrete parts rather than learn them. Unfortunately, the above distilled data is optimized for a given initialization, and does not generalize well to other initializations, as it encodes the information of both the training dataset x and a particular network initialization θ_0. To address this issue, we turn to calculate a small number of distilled data that can work for networks with random initializations from a specific distribution. We formulate the optimization problem as x̃*, η̃* = arg min_{x̃, η̃} E_{θ_0 ∼ p(θ_0)} L(x̃, η̃; θ_0), where the network initialization θ_0 is randomly sampled from a distribution p(θ_0). During our optimization, the distilled data are optimized to work well for randomly initialized networks. In practice, we observe that the final distilled data generalize well to unseen initializations. In addition, these distilled images often look quite informative, encoding the discriminative features of each category (e.g., in Figure 2). Algorithm 1 illustrates our main method:
Algorithm 1
Input: p(θ_0): distribution of initial weights; M: the number of distilled data
Input: α: step size; n: batch size; T: the number of optimization iterations; η̃_0: initial value for η̃
1: Initialize x̃ either from N(0, I) or from real training images; η̃ ← η̃_0
2: for each training step t = 1 to T do
3:   Get a minibatch of real training data x_t = {x_{t,j}}_{j=1}^{n}
4:   Sample a batch of initial weights θ_0^{(j)} ∼ p(θ_0)
5:   for each sampled θ_0^{(j)} do
6:     Compute updated parameter with GD: θ_1^{(j)} = θ_0^{(j)} − η̃ ∇_{θ_0^{(j)}} ℓ(x̃, θ_0^{(j)})
7:     Evaluate the objective function on real training data: L^{(j)} = ℓ(x_t, θ_1^{(j)})
8:   end for
9:   Update x̃ and η̃ with a gradient step on Σ_j L^{(j)}
10: end for
Output: distilled data x̃ and optimized learning rate η̃
As the optimization (Equation 4) is highly non-linear and complex, the initialization of x̃ plays a critical role in the final performance. We experiment with different initialization strategies and observe that using random real images as initialization often produces better distilled images compared to random initialization, e.g., N(0, I). For a compact set of distilled data to be properly learned, it turns out having only one GD step is far from sufficient. Next, we derive a lower bound on the size of distilled data needed for a simple model with arbitrary initial θ_0 in one GD step, and discuss its implications on our algorithm. This section studies our formulation in a simple linear regression problem with quadratic loss. We derive a lower bound of the size of distilled data needed to achieve the same performance as training on the full dataset for arbitrary initialization with one GD step.
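Before turning to the linear case, the following PyTorch sketch (ours, not the authors' released code) shows how the random-initialization objective above can be implemented with automatic differentiation. The two-layer network and the stand-in minibatches are illustrative placeholders; the softplus parameterization of η̃ follows the appendix note on keeping the learning rate positive.

import torch
import torch.nn.functional as F

def functional_net(x, theta):
    # placeholder two-layer classifier applied with explicit weights theta = (W1, b1, W2, b2)
    W1, b1, W2, b2 = theta
    return F.relu(x @ W1 + b1) @ W2 + b2

def sample_theta0():
    # stand-in for p(theta0): a fresh random initialization every outer step
    return [(torch.randn(784, 256) * 0.05).requires_grad_(),
            torch.zeros(256, requires_grad=True),
            (torch.randn(256, 10) * 0.05).requires_grad_(),
            torch.zeros(10, requires_grad=True)]

x_tilde = torch.randn(10, 784, requires_grad=True)   # distilled images, one per class
y_tilde = torch.arange(10)                            # discrete labels are fixed, not learned
eta_raw = torch.zeros(1, requires_grad=True)          # softplus(eta_raw) keeps eta_tilde positive
opt = torch.optim.Adam([x_tilde, eta_raw], lr=1e-3)

for step in range(1000):
    theta0 = sample_theta0()
    x_real = torch.randn(64, 784)                     # stand-in for a real minibatch
    y_real = torch.randint(0, 10, (64,))
    inner = F.cross_entropy(functional_net(x_tilde, theta0), y_tilde)
    grads = torch.autograd.grad(inner, theta0, create_graph=True)   # keep graph for the outer step
    theta1 = [p - F.softplus(eta_raw) * g for p, g in zip(theta0, grads)]
    outer = F.cross_entropy(functional_net(x_real, theta1), y_real) # evaluate on real data
    opt.zero_grad()
    outer.backward()
    opt.step()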
Consider a dataset x containing N data-target pairs, where d i ∈ R D and t i ∈ R, which we represent as two matrices: an N × D data matrix d and an N × 1 target matrix t. Given the mean squared error metric and a D × 1 weight matrix θ, we have We aim to learn M synthetic data-target pairsx = (d,t), whered is an M × D matrix,t an M × 1 matrix (M N), andη the learning rate, to minimize (x, θ 0 −η∇ θ0 (x, θ 0)). The updated weight matrix after one GD step with these distilled data is For the quadratic loss, there always exists distilled datax that can achieve the same performance as training on the full dataset x (i.e., attaining the global minimum) for any initialization θ 0. For example, given any global minimum solution θ *, we can choosed = N · I andt = N · θ *. But how small can the size of the distilled data be? For such models, the global minimum is attained at any θ * in the condition above, we have Here we make the mild assumption that the feature columns of the data matrix d are independent (i.e., d T d has full rank). For ax = (d,t) to satisfy the above equation for any θ 0, we must have which implies thatd Td has full rank and M ≥ D. Discussion. The analysis above only considers a simple case but suggests that any small number of distilled data fail to generalize to arbitrary initial θ 0. This is intuitively expected as the optimization target (x, θ 1) = (x, θ 0 −η∇ θ0 (x, θ 0)) depends on the local behavior of (x, ·) around θ 0 (e.g., gradient magnitude), which can be drastically different across various initializations θ 0. The lower bound M ≥ D is a quite restricting one, considering that real datasets often have thousands to even hundreds of thousands of dimensions (e.g., images). This analysis motivates us to avoid the limitation of using one GD step by extending to multiple steps in the next section. We extend Algorithm 1 to more than one gradient descent steps by changing Line 6 to multiple sequential GD steps on the same batch of distilled data, i.e., each step i performs and changing Line 9 to backpropagate through all steps. We do not share the same learning rates across steps as later steps often require lower learning rates. Naively computing gradients is memory and computationally intensive. Therefore, we exploit a recent technique called back-gradient optimization, which allows for significantly faster gradient calculation in reverse-mode differentiation . Specifically, back-gradient optimization formulates the necessary second-order terms into efficient Hessian-vector products , which can be easily calculated with modern automatic differentiation systems such as PyTorch . There is freedom in choosing the distribution of initial weights p(θ 0). In this work, we explore the following four practical choices in the experiments: • Random initialization: Distribution over random initial weights, e.g., He Initialization and Xavier Initialization for neural networks. • Fixed initialization: A particular fixed network initialized by the method above. • Random pre-trained weights: Distribution over models pre-trained on other tasks or datasets, e.g., ALEXNET networks trained on ImageNet . • Fixed pre-trained weights: A particular fixed network pre-trained on other tasks and datasets. Distillation with pre-trained weights. Such learned distilled data essentially fine-tune weights pre-trained on one dataset to perform well for a new dataset, thus bridging the gap between the two domains. Domain mismatch and dataset bias represent a challenging problem in machine learning (; ;). 
In this work, we characterize the domain mismatch via distilled data. In Section 4.1.2, we show that a small number of distilled images are sufficient to quickly adapt convolutional neural network (CNN) models to new datasets and tasks. To guard against domain shift, several continual learning methods store a subset of training samples in a small memory buffer, and restrict future updates to maintain reasonable performance on these stored samples (; ; ;). As our distilled images contain rich information about the past training data and task, they could naturally serve as a compressed memory of the past. To test this, we modify a recent continual learning method called Gradient Episodic Memory (GEM) . GEM enforces inequality constraints such that the new model, after being trained on the new data and task, should perform at least as well as the old model on the previously stored data and tasks. Here, we store our distilled data for each task instead of randomly drawn training samples as used in GEM. We use the distilled data to construct inequality constraints, and solve the optimization using quadratic programming, same as in GEM. As shown in Section 4.2, our method compares favorably against several baselines that rely on real images. In this section, we report experiments of regular image classifications on MNIST and CIFAR10 , adaptation from ImageNet to PASCAL-VOC and CUB-200 , and continual learning on permuted MNIST and CIFAR100. Baselines. For each experiment, in addition to baselines specific to the setting, we generally compare our method against baselines trained with data derived or selected from real training images: • Random real images: We randomly sample the same number of real images per category. • Optimized real images: We sample different sets of random real images as above, and choose the top 20% best performing sets. • k-means++: We apply k-means++ clustering to each category, and extract the cluster centroids. Table 1: Comparison between our method and various baselines. All methods use ten images per category (100 in total), except for the average real images baseline, which reuses the same images in different GD steps. For MNIST, our method uses 2000 GD steps, and baselines use the best among #steps ∈ {1, 100, 500, 1000, 2000}. For CIFAR10, our method uses 50 GD steps, and baselines use the best among #steps ∈ {1, 5, 10, 20, 500}. In addition, we include a K-nearest neighbors (KNN) baseline, and report best among all combinations of distance metric ∈ {l1, l2} and one or three neighbors. • Average real images: We compute the average image for each category. Please see the appendix for more details about training and baselines, and additional . We first present experimental on training classifiers either from scratch or adapting from pre-trained weights. For MNIST, the distilled images are trained with LENET , which achieves about 99% test accuracy if conventionally trained. For CIFAR10, we use a network architecture that achieves around 80% test accuracy if conventionally trained. For ImageNet adaptations, we use an ALEXNET. We use 2000 GD steps for MNIST and 50 GD steps for CIFAR10. For random initializations and random pre-trained weights, we report means and standard deviations over 200 held-out models, unless otherwise stated. 
For baselines, we perform each evaluation on 200 held-out models using all possible combinations of learning rate ∈ {distilled learning ratesη *, 1e-3, 3e-3, 1e-2, 3e-2, 1e-1, 3e-1} and several choices of numbers of training GD steps (see table captions for details), and report with the best performing combination. Fixed initialization. With access to initial network weights, distilled images can directly train a fixed network to reach high performance. Experiment show that just 10 distilled images (one per class) can boost the performance of a LENET with an initial accuracy 8.25% to a final accuracy of 93.82% on MNIST in 2000 GD steps. Using 100 distilled images (ten per class) can raise the final accuracy can be raised to 94.41%, as shown in the first column of Table 1. Similarly, 100 distilled images can train a network with an initial accuracy 10.75% to test accuracy of 45.15% on CIFAR10 in 50 GD steps. Figure 2 distilled images trained with randomly sampled initializations using Xavier Initialization . While the ing average test accuracy from these images are not as high as those for fixed initialization, these distilled images crucially do not require a specific initial point, and thus could potentially generalize to a much wider range of starting points. In Section 4.2 below, we present preliminary of achieving nontrivial gains from applying such distilled images to classifier networks during a continual learning training process. Table 2: Adapting models among MNIST (M), USPS (U), and SVHN (S) using 100 distilled images. Our method outperforms few-shot domain adaptation and other baselines in most settings. Due to computation limitations, the 100 distilled images are split into 10 minibatches applied in 10 sequential GD steps, and the entire set of 100 distilled images is iterated through 3 times (30 GD steps in total). For baselines, we train the model using the same number of images with {1, 3, 5} times and report the best . Table 3: Adapting an ALEXNET pre-trained on ImageNet to PASCAL-VOC and CUB-200. We use one distilled image per category, repeatedly applied via three GD steps. Our method significantly outperforms the baselines. For baselines, we train the model with {1, 3, 5} GD steps and report the best. Results are over 10 runs. Multiple gradient descent steps. Section 3.3 has shown theoretical limitations of using only one step in a simple linear case. In Figure 3, we empirically verify for deep networks that using multiple steps drastically outperforms the single step method, given the same number of distilled images. Table 1 summarizes the of our method and all baselines. Our method with both fixed and random initializations outperforms all the baselines on CIFAR10 and most of the baselines on MNIST. Next, we show the extended setting of our algorithm discussed in Section 3.5, where the weights are not randomly initialized but pre-trained on a particular dataset. In this section, for random initial weights, we train the distilled images on 2000 pre-trained models and evaluate them on 200 unseen models. Fixed and random pre-trained weights on digits. As shown in Section 3.5, we can optimize distilled images to quickly fine-tune pre-trained models on a new dataset. Table 2 shows that our method is more effective than various baselines on adaptation between three digits datasets: MNIST, USPS , and SVHN . We also compare our method against a stateof-the-art few-shot domain adaptation method . 
Although our method uses the entire training set to compute the distilled images, both methods use the same number of images to distill the knowledge of target dataset. Prior work is outperformed by our method with fixed pre-trained weights on all the tasks, and by our method with random pre-trained weights on two of the three tasks. This shows that our distilled images effectively compress the information of target datasets. Fixed pre-trained ALEXNET to PASCAL-VOC and CUB-200. In Table 3, we adapt a widely used ALEXNET model pre-trained on ImageNet to image classification on PASCAL-VOC and CUB-200 datasets. Given only one distilled image per category, our method outperforms various baselines significantly. Our method is on par with fine-tuning on the full datasets with thousands of images. We modify Gradient Episodic Memory (GEM) to store distilled data for each task rather than real training images. Experiments in use large memory buffers, up to 25% of the training set. Instead, we focus on a more realistic scenario where the buffer is rather small (≤ 1% of the training set). Following the experiment settings and architecture choices from , we consider two continual learning tasks: Permuted MNIST CIFAR100 Memory size per task = 10 iCaRL -42.4 GEM 67 No memory buffer EWC 63.5 45.6 Table 4: Continual learning . Distilled images are trained with random Xavier Initialization distribution. For permuted MNIST, they are trained with 2000 GD steps. For CIFAR100, they are trained for 200 GD steps. • Permuted MNIST: 20 classification tasks each formed by using a different permutation to arrange pixels from MNIST images. Each task contains 1, 000 training images. The classifier used has 2 hidden layers each with 100 neurons. • CIFAR100: 20 classification tasks formed by splitting the 100 classes into 20 equal subsets of 5 classes. Each task contains 2, 500 training images. The classifier used is RESNET18 . Table 4 shows that using distilled data drastically improves final overall accuracy on all tasks, and reduces buffer size by up to 5× compared to the original GEM that uses real images. We only report the basic iCaRL setting on CIFAR100 because it requires similar input distributions across all tasks, and it is unclear how to properly inject distilled images into its specialized examplar selection procedure. The appendix details the hyper-parameters tested for each continual learning algorithm. In this paper, we have presented dataset distillation for compressing the knowledge of entire training data into a few synthetic training images. We demonstrate how to train a network to reach surprisingly good performance with only a small number of distilled images. Finally, the distilled images can efficiently store the memory of previous tasks in the continual learning setting. Many challenges remain for knowledge distillation of data. Although our method generalizes well to random initializations, it is still limited to a particular network architecture. Since loss surfaces for different architectures might be drastically different, a more flexible method of applying the distilled data may overcome this difficulty. Another limitation is the increasing computation and memory requirements for finding the distilled data as the number of images and steps increases. To compress large-scale datasets such as ImageNet, we may need first-order gradient approximations to make the optimization computationally feasible. 
Nonetheless, we are encouraged by the findings in this paper on the possibilities of training large models with a few distilled data, leading to potential applications such as accelerating network evaluation in neural architecture search . We believe that the ideas developed in this work might give new insights into the quantity and type of data that deep networks are able to process, and hopefully inspire others to think along this direction. In our experiments, we disable dropout layers in the networks due to the randomness and computational cost they introduce in distillation. Moreover, we initialize the distilled learning rates with a constant between 0.001 and 0.02 depending on the task, and use the Adam solver with a learning rate of 0.001. For random initialization and random pre-trained weights, we sample 4 to 16 initial weights in each optimization step. We run all the experiments on NVIDIA 1080 Ti, Titan Xp, and V100 GPUs. We use one GPU for fixed initial weights and up to four GPUs for random initial weights. Each training typically takes 1 to 6 hours. Below we describe the details of our baselines using real training images. • Random real images: We randomly sample the same number of real images per category. We evaluate the performance over 10 randomly sampled sets. • Optimized real images: We sample 50 sets of real images using the procedure above, pick 10 sets that achieve the best performance on 20 held-out models and 1024 randomly chosen training images, and evaluate the performance of these 10 sets. • k-means++: For each category, we use k-means++ clustering to extract the same number of cluster centroids as the number of distilled images in our method. We evaluate the method over 10 runs. • Average real images: We compute the average image of all the images in each category, which is repeated to match the same total number of images. We evaluate the model only once because average images are deterministic. To enforce our optimized learning rate to be positive, we apply softplus to a scalar trained parameter. For continual learning experiment on CIFAR10 dataset, to compare with GEM , we replace the Batch normalization with Group normalization in RESNET18 , as it is difficult to run back-gradient optimization through batch norm running statistics. For a fair comparison, we use the same architecture for our method and other baselines. For dataset distillation experiments with pre-trained initial weights, distilled images are initialized with N at the beginning of training. For other experiments, distilled images are initialized with random real samples, unless otherwise stated. For the compared continual learning methods, we report the best report from the following combinations of hyper-parameters: • GEM: -γ ∈ {0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1}. -learning rate = 0.1. • iCARL: -regularization ∈ {0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1.0}. -learning rate = 0.1. • Figures 4 and 5 show distilled images trained for random initializations on MNIST and CI-FAR10. • Figures 6, 7, and 8 show distilled images trained for adapting random pre-trained models on digits datasets including MNIST, USPS, and SVHN. 
Figure 6: Dataset distillation for adapting random pretrained models from USPS to MNIST. 100 distilled images are split into 10 GD steps, shown as 10 rows; the top row is the earliest GD step and the bottom row is the last. The 10 steps are iterated over three times to finish adaptation, leading to a total of 30 GD steps. These images train average test accuracy on 200 held-out models from 67.54% ± 3.91% to 92.74% ± 1.38%.
Figure 7: Dataset distillation for adapting random pretrained models from MNIST to USPS. 100 distilled images are split into 10 GD steps, shown as 10 rows; the top row is the earliest GD step and the bottom row is the last. The 10 steps are iterated over three times to finish adaptation, leading to a total of 30 GD steps. These images train average test accuracy on 200 held-out models from 90.43% ± 2.97% to 95.38% ± 1.81%.
Figure 8: Dataset distillation for adapting random pretrained models from SVHN to MNIST. 100 distilled images are split into 10 GD steps, shown as 10 rows; the top row is the earliest GD step and the bottom row is the last. The 10 steps are iterated over three times to finish adaptation, leading to a total of 30 GD steps. These images train average test accuracy on 200 held-out models from 51.64% ± 2.77% to 85.21% ± 4.73%.
ryxO3gBtPB
We propose to distill a large dataset into a small set of synthetic data that can train networks close to original performance.
We relate the minimax game of generative adversarial networks (GANs) to finding the saddle points of the Lagrangian function for a convex optimization problem, where the discriminator outputs and the distribution of generator outputs play the roles of primal variables and dual variables, respectively. This formulation shows the connection between the standard GAN training process and the primal-dual subgradient methods for convex optimization. The inherent connection does not only provide a theoretical convergence proof for training GANs in the function space, but also inspires a novel objective function for training. The modified objective function forces the distribution of generator outputs to be updated along the direction according to the primal-dual subgradient methods. A toy example shows that the proposed method is able to resolve mode collapse, which in this case cannot be avoided by the standard GAN or Wasserstein GAN. Experiments on both Gaussian mixture synthetic data and real-world image datasets demonstrate the performance of the proposed method on generating diverse samples. Generative adversarial networks (GANs) are a class of game theoretical methods for learning data distributions. It trains the generative model by maintaining two deep neural networks, namely the discriminator network D and the generator network G. The generator aims to produce samples resembling real data samples, while the discriminator aims to distinguish the generated samples and real data samples. The standard GAN training procedure is formulated as the following minimax game: DISPLAYFORM0 where p d (x) is the data distribution and p z (z) is the noise distribution. The generated samples G(z) induces a generated distribution p g (x). Theoretically, the optimal solution to is p * g = p d and D * (x) = 1/2 for all x in the support of data distribution. In practice, the discriminator network and the generator network are parameterized by θ θ θ d and θ θ θ g, respectively. The neural network parameters are updated iteratively according to gradient descent. In particular, the discriminator is first updated either with multiple gradient descent steps until convergence or with a single gradient descent step, then the generator is updated with a single descent step. However, the analysis of the convergence properties on the training approaches is challenging, as noted by Ian Goodfellow in BID10, "For GANs, there is no theoretical prediction as to whether simultaneous gradient descent should converge or not. Settling this theoretical question, and developing algorithms guaranteed to converge, remain important open research problems.". There have been some recent studies on the convergence behaviours of GAN training (; BID18 BID14 BID24 BID22 .The simultaneous gradient descent method is proved to converge assuming the objective function is convex-concave in the network parameters . The local stability property is established in BID14 BID24.One notable inconvergence issue with GAN training is referred to as mode collapse, where the generator characterizes only a few modes of the true data distribution BID11 BID18. Various methods have been proposed to alleviate the mode collapse problem. Feature matching for intermediate layers of the discriminator has been proposed in . In BID23, the generator is updated based on a sequence of previous unrolled discriminators. A mixture of neural networks are used to generate diverse samples (; BID15 BID2 . 
In, it was proposed that adding noise perturbation on the inputs to the discriminator can alleviate the mode collapse problem. It is shown that this training-with-noise technique is equivalent to adding a regularizer on the gradient norm of the discriminator . The Wasserstein divergence is proposed to resolve the problem of incontinuous divergence when the generated distribution and the data distribution have disjoint supports BID12. Mode regularization is used in the loss function to penalize the missing modes BID6 ). The regularization is usually based on heuristics, which tries to minimize the distance between the data samples and the generated samples, but lacks theoretical convergence guarantee. In this paper, we formulate the minimax optimization for GAN training as finding the saddle points of the Lagrangian function for a convex optimization problem. In the convex optimization problem, the discriminator function D(·) and the probabilities of generator outputs p g (·) play the roles of the primal variables and dual variables, respectively. This connection not only provides important insights in understanding the convergence of GAN training, but also enables us to leverage the primal-dual subgradient methods to design a novel objective function that helps to alleviate mode collapse. A toy example reveals that for some cases when standard GAN or WGAN inevitably leads to mode collapse, our proposed method can effectively avoid mode collapse and converge to the optimal point. In this paper, we do not aim at achieving superior performance over other GANs, but rather provide a new perspective of understanding GANs, and propose an improved training technique that can be applied on top of existing GANs. The contributions of the paper are as follows:• The standard training of GANs in the function space is formulated as primal-dual subgradient methods for solving convex optimizations.• This formulation enables us to show that with a proper gradient descent step size, updating the discriminator and generator probabilities according to the primal-dual algorithms will provably converge to the optimal point.• This formulation in a novel training objective for the generator. With the proposed objective function, the generator is updated such that the probabilities of generator outputs are pushed to the optimal update direction derived by the primal-dual algorithms. Experiments have shown that this simple objective function can effectively alleviate mode collapse in GAN training.• The convex optimization framework incorporates different variants of GANs including the family of f -GAN and an approximate variant of WGAN. For all these variants, the training objective can be improved by including the optimal update direction of the generated probabilities. In this section, we first describe the primal-dual subgradient methods for convex optimization. Later, we explicitly construct a convex optimization and relate the subgradient methods to standard GAN training. Consider the following convex optimization problem: DISPLAYFORM0 where x ∈ R k is a length-k vector, X is a convex set, and f i (x), i = 0 · · ·,, are concave functions mapping from R k to R. The Lagrangian function is calculated as DISPLAYFORM1 In the optimization problem, the variables x ∈ R k and λ λ λ ∈ R + are referred to as primal variables and dual variables, respectively. 
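In symbols, a standard form of this constrained program and its Lagrangian, consistent with the surrounding description (our reconstruction, writing ℓ for the number of constraints), is:

\max_{x \in \mathcal{X}} \; f_0(x) \quad \text{s.t.} \quad f_i(x) \ge 0, \;\; i = 1, \dots, \ell,
\qquad
L(x, \boldsymbol{\lambda}) \;=\; f_0(x) + \sum_{i=1}^{\ell} \lambda_i f_i(x), \quad \boldsymbol{\lambda} \in \mathbb{R}^{\ell}_{+}.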
The primal-dual pair (x *, λ λ λ *) is a saddle-point of the Lagrangian fuction, if it satisfies: DISPLAYFORM2 Primal-dual subgradient methods have been widely used to solve the convex optimization problems, where the primal and dual variables are updated iteratively, and converge to a saddle point (Nedić & ; BID16 .There are two forms of algorithms, namely dual-driven algorithm and primal-dual-driven algorithm. For both approaches, the dual variables are updated according to the subgradient of L(x(t), λ λ λ(t)) with respect to λ λ λ(t) at each iteration t. For the dual-driven algorithm, the primal variables are updated to achieve maximum of L(x, λ λ λ(t)) over x. For the primal-dual-driven algorithm, the primal variables are updated according to the subgradient of L(x(t), λ λ λ(t)) with respect to x(t). The iterative update process is summarized as follows: DISPLAYFORM3 where P X (·) denotes the projection on set X and (x) + = max(x, 0).The following theorem proves that the primal-dual subgradient methods will make the primal and dual variables converge to the optimal solution of the convex optimization problem. Theorem 1 Consider the convex optimization. Assume the set of saddle points is compact. Suppose f 0 (x) is a strictly concave function over x ∈ X and the subgradient at each step is bounded. There exists some step size α (t) such that both the dual-driven algorithm and the primal-dual-driven algorithm yield x (t) → x * and λ λ λ (t) → λ λ λ *, where x * is the solution to, and λ λ λ * satisfies DISPLAYFORM4 Proof: See Appendix 7.1. We explicitly construct a convex optimization problem and relate it to the minimax game of GANs. We assume that the source data and generated samples belong to a finite set {x 1, · · ·, x n} of arbitrary size n. The extension to uncountable sets can be derived in a similar manner BID20. The finite case is of particular interest, because any real-world data has a finite size, albeit the size could be arbitrarily large. We construct the following convex optimization problem: DISPLAYFORM0 where D is some convex set. The primal variables are DISPLAYFORM1 ) is the Lagrangian dual associated with the i-th constraint. The Lagrangian function is thus DISPLAYFORM2 When D = {D : 0 ≤ D i ≤ 1, ∀i}, finding the saddle points for the Lagrangian function is exactly equivalent to solving the GAN minimax problem. This inherent connection enables us to utilize the primal-dual subgradient methods to design update rules for D(x) and p g (x) such that they converge to the saddle points. The following theorem provides a theoretical guideline for the training of GANs. Theorem 2 Consider the Lagrangian function given by with D = {D : ≤ D i ≤ 1 −, ∀i}, where 0 < < 1/2. If the discriminator and generator have enough capacity, and the discriminator output and the generated distribution are updated according to the primal-dual update rules and FORMULA4 with DISPLAYFORM3 Proof: The optimization problem is a particularized form of, where DISPLAYFORM4 The objective function is strictly concave over D. Moreover, since D is projected onto the compact set [, 1 −] at each iteration t, the subgradients ∂f i (D (t) ) are bounded. The assumptions of Theorem 1 are satisfied. Since the constraint (8b) gives an upper bound of D i ≤ 1/2, the solution to the above convex optimization is obviously DISPLAYFORM5 Since the problem is convex, the optimal primal solution is the primal saddle point of the Lagrangian function (, Chapter 5). DISPLAYFORM6, and the saddle point is unique. 
By Theorem 1, the primal-dual update rules will guarantee convergence of DISPLAYFORM7 It can be seen that the standard training of GAN corresponds to either dual-driven algorithm or primal-dual-driven algorithm BID11. A natural question arises: Why does the standard training fail to converge and lead to mode collapse? As will be shown later, the underlying reason is that standard training of GANs in some cases do not update the generated distribution according to. Theorem 2 inspires us to propose a training algorithm to tackle this issue. First, we present our training algorithm. Later, we will use a toy example to give intuitions of why our algorithm is effective to avoid mode collapse. The algorithm is described in Algorithm 1. The maximum step of discriminator update is k 0. In the context of primal-dual-driven algorithms, k 0 = 1. In the context of dual-driven algorithms, k 0 is some large constant, such that the discriminator is updated till convergence at each training epoch. The update of the discriminator is the same as standard GAN training. The main difference is the modified loss function for the generator update. The intuition is that when the generated samples have disjoint support from the data, the generated distribution at the data support may not be updated using standard training. This is exactly one source of mode collapse. Ideally, the modified loss function will always update the generated probabilities at the data support along the optimal direction. The generated probability mass at DISPLAYFORM0 where 1{·} is the indicator function. The indicator function is not differentiable, so we use a continuous kernel to approximate it. Define DISPLAYFORM1 where σ is some positive constant. The constant σ is also called bandwidth for kernel density estimation. The empirical generated distribution is thus approximately calculated as. There Initialization: Choose the objective function f 0 (·) and constraint function f 1 (·) according to the GAN realization. For the original GAN based on Jensen-Shannon divergence, f 0 (D) = log (D) and f 1 (D) = log(2(1 − D)). while the stopping criterion is not met do Sample minibatch m 1 data samples DISPLAYFORM0 Update the discriminator parameters with gradient ascent: DISPLAYFORM1 end for Update the target generated distribution as: DISPLAYFORM2 where α is some step size and DISPLAYFORM3 Withp g (x i) fixed, update the generator parameters with gradient descent: DISPLAYFORM4 end while are different bandwidth selection methods BID5 BID13. It can be seen that as σ → 0, k σ (x − y) tends to the indicator function, but it will not give large enough gradients to far areas that experience mode collapse. A larger σ implies a coarser quantization of the space in approximating the distribution. In practical training, the kernel bandwidth can be set larger at first and gradually decreases as the iteration continues. By the dual update rule, the generated probability of every x i should be updated as DISPLAYFORM5 This motivates us to add the second term of in the loss function, such that the generated distribution is pushed towards the target distribution.Although having good convergence guarantee in theory, the non-parametric kernel density estimation of the generated distribution may suffer from the curse of dimension. Previous works combining kernel learning and the GAN framework have proposed methods to scale the algorithms to deal with high-dimensional data, and the performances are promising BID19 BID17 ). 
One common method is to project the data onto a low dimensional space using an autoencoder or a bottleneck layer of a pretrained neurual network, and then apply the kernel-based estimates on the feature space. Using this approach, the estimated probability of x i becomes DISPLAYFORM6 where f φ is the projection of the data to a low dimensional space. We will leave the work of generating high-resolution images using this approach as future work. Mode collapse occurs when the generated samples have a very small probability to overlap with some families of the data samples, and the discriminator D(·) is locally constant around the region of the generated samples. We use a toy example to show that the standard training of GAN and Wasserstein may fail to avoid mode collapse, while our proposed method can succeed. Claim 1 Suppose the data distribution is p d (x) = 1{x = 1}, and the initial generated distribution is p g (x) = 1{x = 0}. The discriminator output D(x) is some function that is equal to zero for |x − 0| ≤ δ and is equal to one for |x − 1| ≤ δ, where 0 < δ < 1/2. Standard training of GAN and WGAN leads to mode collapse. Proof: We first show that the discriminator is not updated, and then show that the generator is not updated during the standard training process. In standard training of GAN and WGAN, the discriminator is updated according to the gradient of. For GAN, since 0 ≤ D(x) ≤ 1, the objective funtion for the discriminator is at most zero, i.e., DISPLAYFORM0 which is achieved by the current D(x) by assumption. For WGAN, the optimal discrminator output D(x) is some 1-Lipschitz function such that DISPLAYFORM1 where is due to the Lipschitz condition |D − D| ≤ 1. The current D(x) is obviously optimal. Thus, for both GAN and WGAN, the gradient of the loss function with respect to θ θ θ d is zero and the discriminator parameters are not updated. On the other hand, in standard training, the generator parameters θ θ θ g are updated with only the first term of. By the chain rule, DISPLAYFORM2 where is due to the assumption that D(x) is locally constant for x = 0. Therefore, the generator and the discriminator reach a local optimum point. The generated samples are all zeros. In our proposed training method, when x = 1, the optimal update direction is given by, wherẽ p g is a large value because D = 1. Therefore, by, the second term in the loss function is very large, which forces the generator to generate samples at G(z) = 1. As the iteration continues, the generated distribution gradually converges to data distribution, and D(x) gradually converges to 1/2, which makes ∂ pg(x) L(D(x), p g (x)) = log(2(1 − D(x))) become zero. The experiment in Section 5 demonstrates this training dynamic. In this paper, the standard training of GANs in function space has been formulated as primal-dual updates for convex optimization. However, the training is optimized over the network parameters in practice, which typically yields a non-convex non-concave problem. Theorem 2 tells us that as long as the discriminator output and the generated distribution are updated according to the primal-dual update rule, mode collapse should not occur. This insight leads to the addition of the second term in the modified loss function for the generator. In Section 5, experiments on the above-mentioned toy example and real-world datasets show that the proposed training technique can greatly improve the baseline performance. 
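To make the modified generator update concrete, the following PyTorch sketch gives our reading of the generator step in Algorithm 1. The Gaussian kernel, the squared-difference penalty, and the clamping constants are our assumptions rather than the paper's exact equations, and the inputs are assumed to be flattened vectors.

import torch

def kernel_density(points, samples, sigma):
    # estimated p_g at the (real) data points: average Gaussian kernel to the generated minibatch
    sq_dist = torch.cdist(points, samples).pow(2)
    return torch.exp(-sq_dist / (2 * sigma ** 2)).mean(dim=1)

def generator_loss(D, G, x_data, z, sigma=0.5, alpha=0.1, lam=1.0):
    # D returns values in (0, 1); x_data and G(z) have shape (batch, dim)
    x_fake = G(z)
    d_fake = D(x_fake).view(-1)
    standard = torch.log(2 * (1 - d_fake).clamp_min(1e-6)).mean()       # f1(D(G(z)))
    p_hat = kernel_density(x_data, x_fake, sigma)                        # kernel estimate of p_g
    with torch.no_grad():
        d_real = D(x_data).view(-1).clamp(1e-6, 1 - 1e-6)
        # dual-style target: large wherever D(x_i) is close to 1, i.e. at uncovered data modes
        p_target = (p_hat - alpha * torch.log(2 * (1 - d_real))).clamp_min(0.0)
    # assumed penalty form: push the estimated generated distribution toward the target
    return standard + lam * ((p_hat - p_target) ** 2).sum()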
Consider the following optimization problem: DISPLAYFORM0 DISPLAYFORM1 where f 0 (·) and f 1 (·) are concave functions. Compared with the generic convex optimization problem, the number of constraint functions is set to be the variable alphabet size, and the constraint functions are DISPLAYFORM2 The objective and constraint functions in can be tailored to produce different GAN variants. For example, TAB0 shows the large family of f -GAN . The last row of TAB0 gives a new realization of GAN with a unique saddle point of D * (x) = 2 and DISPLAYFORM3 We also derive a GAN variant similar to WGAN, which is named "Approximate WGAN". As shown in TAB0, the objective and constraint functions yield the following minimax problem: DISPLAYFORM4 where is an arbitrary positive constant. The augmented term D 2 (x) is to make the objective function strictly concave, without changing the original solution. It can be seen that this problem has a unique saddle point p * g (x) = p d (x). As tends to 0, the training objective function becomes identical to WGAN. The optimal D(x) for WGAN is some Lipschitz function that maximizes E x∼p d (x) {D(x)} − E x∼pg(x) {D(x)}, while for our problem is D * (x) = 0. Weight clipping can still be applied, but serves as a regularizer to make the training more robust BID21.The training algorithms for these variants of GANs follow by simply changing the objective function f 0 (·) and constraint function f 1 (·) accordingly in Algorithm 1. 5.1 SYNTHETIC DATA FIG0 shows the training performance for a toy example. The data distribution is p g (x) = 1{x = 1}. The inital generated samples are concentrated around x = −3.0. The details of the neural network parameters can be seen in Appendix 7.3. FIG0 shows the generated samples in the 90 quantile as the training iterates. After 8000 iterations, the generated samples from standard training of GAN and WGAN are still concentrated around x = −3.0. As shown in FIG0, the discrminators hardly have any updates throughout the training process. Using the proposed training approach, the generated samples gradually converge to the data distribution and the discriminator output converges to the optimal solution with D = 1/2. Fig. 2 shows the performance of the proposed method for a mixture of 8 Gaussain data on a circle. While the original GANs experience mode collapse BID15 BID23, our proposed method is able to generate samples over all 8 modes. In the training process, the bandwidth of the Gaussian kernel is inialized to be σ 2 = 0.1 and decreases at a rate of 0.8 DISPLAYFORM0, where t is the iteration number. The generated samples are dispersed initially, and then gradually converge to the Gaussian data samples. Note that our proposed method involves a low complexity with a simple regularization term added in the loss function for the generator update. Figure 2: Performance of the proposed algorithm on 2D mixture of Gaussian data. The data samples are marked in blue and the generated samples are marked in orange. We also evaluate the performance of the proposed method on two real-world datasets: MNIST and CIFAR-10. Please refer to the appendix for detailed architectures. Inception score is employed to evaluate the proposed method. It applies a pretrained inception model to every generated image to get the conditional label distribution p(y|x). The Inception score is calculated as exp (E x {KL(p(y|x) p(y)}). It measures the quality and diversity of the generated images. The MNIST dataset contains 60000 labeled images of 28 × 28 grayscale digits. 
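As a concrete illustration of the inception score defined above, the score can be computed directly from the matrix of conditional label probabilities p(y|x) that a pretrained classifier assigns to the generated images; the epsilon smoothing and the single-split computation below are implementation assumptions.

```python
import numpy as np

def inception_score(p_y_given_x, eps=1e-12):
    """p_y_given_x: (n_samples, n_classes) classifier probabilities on generated images."""
    p_y = p_y_given_x.mean(axis=0, keepdims=True)   # marginal label distribution p(y)
    kl = np.sum(p_y_given_x * (np.log(p_y_given_x + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))                 # exp(E_x[ KL(p(y|x) || p(y)) ])
```

In practice the score is usually averaged over several splits of the generated samples.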
We train a simple LeNet-5 convolutional neural network classifier on the MNIST dataset that achieves 98.9% test accuracy, and use it to compute the inception score. The proposed method achieves an inception score of 9.8, while the baseline method achieves an inception score of 8.8. Examples of generated images are shown in Fig. 3. The generated images are almost indistinguishable from real images. We further evaluated our algorithm on an augmented 1000-class MNIST dataset to demonstrate the robustness of the proposed algorithm against the mode collapse problem. More details of the experimental results can be found in the Appendix. CIFAR is a natural scene dataset of 32 × 32 images. We use this dataset to evaluate the visual quality of the generated samples. Table 2 shows the inception scores of different GAN models on the CIFAR-10 dataset (Table 2, Inception scores on CIFAR-10: Real data 11.24 ± 0.16; WGAN 3.82 ± 0.06; MIX + WGAN (BID2) 4.04 ± 0.07; Improved-GAN 4.36 ± 0.04; ALI (BID7) 5.34 ± 0.05; DCGAN 6.40 ± 0.05; Proposed method 4.53 ± 0.04). The inception score of the proposed model is much better than that of the baseline method WGAN, which uses a similar network architecture and training method. Note that although DCGAN achieves a better score, it uses a more complex network architecture. Examples of the generated images are shown in Fig. 3 (Figure 3: examples of generated images using the MNIST and CIFAR datasets). In this paper, we propose a primal-dual formulation for generative adversarial learning. This formulation interprets GANs from the perspective of convex optimization, and gives the optimal update of the discriminator and the generated distribution with convergence guarantee. By framing different variants of GANs under the convex optimization framework, the corresponding training algorithms can all be improved by pushing the generated distribution along the optimal direction. Experiments on two synthetic datasets demonstrate that the proposed formulation can effectively avoid mode collapse. It also achieves competitive quantitative evaluation scores on two benchmark real-world image datasets. The proof of convergence for dual-driven algorithms can be found in (BID4, Chapter 3). The primal-dual-driven algorithm for continuous-time updates has been studied in BID8. Here, we show the convergence for the discrete-time case. We choose a step size α(t) that satisfies DISPLAYFORM0 Let z(t) = [x(t), λ(t)]^T be a vector consisting of the primal and dual variables at the t-th iteration. The primal-dual-driven update can be expressed as: DISPLAYFORM1 where DISPLAYFORM2 and DISPLAYFORM3 Since the subgradient is bounded by assumption, there exists M > 0 such that ||T(·)||_2^2 < M, where ||·||_2 stands for the L2 norm. (A table of the number of modes generated and the corresponding inception score appeared here.) Following the setup of BID23, the networks are trained with Root Mean Square Propagation (RMSProp) with a learning rate of 1e-4. For GAN, the networks are trained with Adam with a learning rate of 1e-4. The minibatch size is 32. The bandwidth parameter for the Gaussian kernel is initialized to be σ = 0.5 and then is changed to 0.1 after 2000 iterations. We use the network structure in BID23 to evaluate the performance of our proposed method. The data is sampled from a mixture of 8 Gaussians of standard deviation 0.02 uniformly located on a circle of radius 2. The noise samples are a vector of 256 independent and identically distributed (i.i.d.)
Gaussian variables with mean zero and standard deviation of 1. The generator has two hidden layers of size 128 with ReLU activation. The last layer is a linear projection to two dimensions. The discriminator has one hidden layer of size 128 with ReLU activation followed by a fully connected network to a sigmoid activation. All the biases are initialized to zeros and the weights are initialized via the "Xavier" initialization BID9. The training follows the primal-dual-driven algorithm, where both the generator and the discriminator are updated once at each iteration. The Adam optimizer is used to train the discriminator with an 8e-4 learning rate and the generator with a 4e-4 learning rate. The minibatch size is 64. For the MNIST dataset, the generator network is a deconvolutional neural network. It has two fully connected layers with hidden sizes 1024 and 7 × 7 × 128, two deconvolutional layers with 64 and 32 units, stride 2, and a 4 × 4 deconvolutional kernel for each layer, respectively, and a final convolutional layer with one hidden unit and a 4 × 4 convolutional kernel. The discriminator network is a two-layer convolutional neural network with 64 and 32 units followed by two fully connected layers of hidden sizes 1024 and 1. The input noise dimension is 64. We employ the Adam optimization algorithm with initial learning rate 0.01 and β = 0.5. For the CIFAR dataset, the generator is a 4-layer deconvolutional neural network, and the discriminator is a 4-layer convolutional neural network. The number of units for the discriminator is, and the number of units for the generator is. The stride for each deconvolutional and convolutional layer is two. We employ the RMSProp optimization algorithm with initial learning rate of 0.0001, decay rate 0.95, and momentum 0.1.
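For reference, one plausible PyTorch realization of the MNIST generator described above is sketched below; the padding scheme, hidden activations, and output non-linearity are assumptions, as they are not specified in the text.

```python
import torch
import torch.nn as nn

class MnistGenerator(nn.Module):
    """Two FC layers (1024 and 7*7*128), two 4x4 stride-2 transposed convolutions
    (64 and 32 units), and a final 4x4 convolution to a single output channel."""
    def __init__(self, noise_dim=64):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(noise_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 7 * 7 * 128), nn.ReLU(),
        )
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),  # 7x7 -> 14x14
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),   # 14x14 -> 28x28
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=4, padding="same"),                  # keep 28x28
            nn.Tanh(),  # assumed output activation
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 128, 7, 7)
        return self.deconv(h)

# Shape check: MnistGenerator()(torch.randn(8, 64)).shape == (8, 1, 28, 28)
```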
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BJNRFNlRW
We propose a primal-dual subgradient method for training GANs and this method effectively alleviates mode collapse.
Specifying reward functions is difficult, which motivates the area of reward inference: learning rewards from human behavior. The starting assumption in the area is that human behavior is optimal given the desired reward function, but in reality people have many different forms of irrationality, from noise to myopia to risk aversion and beyond. This fact seems like it will be strictly harmful to reward inference: it is already hard to infer the reward from rational behavior, and noise and systematic biases make actions have less direct of a relationship to the reward. Our insight in this work is that, contrary to expectations, irrationality can actually help rather than hinder reward inference. For some types and amounts of irrationality, the expert now produces more varied policies compared to rational behavior, which help disambiguate among different reward parameters -- those that otherwise correspond to the same rational behavior. We put this to the test in a systematic analysis of the effect of irrationality on reward inference. We start by covering the space of irrationalities as deviations from the Bellman update, simulate expert behavior, and measure the accuracy of inference to contrast the different types and study the gains and losses. We provide a mutual information-based analysis of our findings, and wrap up by discussing the need to accurately model irrationality, as well as to what extent we might expect (or be able to train) real people to exhibit helpful irrationalities when teaching rewards to learners. The application of reinforcement learning (RL) in increasingly complex environments has been most successful for problems that are already represented by a specified reward function (; ; . Unfortunately, not only do real-world tasks usually lack an explicit exogenously-specified reward function, but attempting to specify one tends to lead to unexpected side-effects as the agent is faced with new situations . This has motivated the area of reward inference: the process of estimating a reward function from human inputs. The inputs are traditionally demonstrations, leading to inverse reinforcement learning (IRL) or inverse optimal control (IOC) (; ; ;). Recent work has expanded the range of inputs significantly,to comparisons (; ;), natural language instructions , physical corrections , proxy rewards ), or scalar reward values . The central assumption behind these methods is that human behavior is rational, i.e. optimal with respect to the desired reward (cumulative, in expectation). Unfortunately, decades of research in behavioral economics and cognitive science has unearthed a deluge of irrationalities, i.e. of ways in which people deviate from optimal decision making: hyperbolic discounting, scope insensitivity, optimism bias, decision noise, certainty effects, loss aversion, status quo bias, etc. Work on reward inference has predominantly used one model of irrationality: decision-making noise, where the probability of an action relates to the value that action has. The most widely used model by far is a Bolzmann distribution stemming from the Luce-Sherpard rule (; ;) and the principle of maximum (causal) entropy in (;, which we will refer to as Bolzmann-rationality . Recent work has started to incorporate systematic biases though, like risk-aversion , having the wrong dynamics belief , and myopia and hyperbolic discounting . 
Learning from irrational experts feels like daunting task: reward inference is already hard with rational behavior, but now a learner needs to make sense of behavior that is noisy or systematically biased. Our goal in this work is to characterize just how muddied the waters are -how (and how much) do different irrationalities affect reward inference? Our insight is that, contrary to expectations, irrationality can actually help, rather than hinder, reward inference. Our explanation is that how good reward inference is depends on the mutual information between the policies produced by the expert and the reward parameters to be inferred. While it is often possible for two reward parameters to produce the same rational behavior, irrationalities can sometimes produce different behaviors that disambiguate between those same two reward parameters. For instance, noise can help when it is related to the value function, as Boltzmann noise is, because it distinguishes the difference in values even when the optimal action stays the same. Optimism can be helpful because the expert takes fewer risk-avoiding actions and acts more directly on their goal. Overall, we contribute 1) an analysis and comparison of the effects of different biases on reward inference testing our insight, 2) a way to systematically formalize and cover the space of irrationalities in order to conduct such an analysis, and 3) evidence for the importance of assuming the right type of irrationality during inference. Our good news is that irrationalities can indeed be an ally for inference. Of course, this is not always true -the details of which irrationality type and how much of it also matter. We see these as opening the door to a better understanding of reward inference, as well as to practical ways of making inference easier by asking for the right kind of expert demonstrations -after all, in some cases it might be easier for people to act optimistically or myopically than to act rationally. Our reinforce that optimal teaching is different from optimal doing, but point out that some forms of teaching might actually be easier than doing. Our goal is to explore the effect irrationalities have on reward inference if the learner knows about them -we explore the need for the learner to accurately model irrationalities in section 4.2. While ideally we would recruit human subjects with different irrationalities and measure how well we can learn rewards, this is prohibitive because we do not get to dictate someone's irrationality type: people exhibit a mix of them, some yet to be discovered. Further, measuring accuracy of inference is complicated by the fact that we do not have ground truth access to the desired reward: the learner can measure agreement with some test set, but the test set itself is produced subject to the same irrationalities that produced the training data. As experimenters, we would remain deluded about the human's true intentions and preferences. To address this issue, we simulate expert behavior subject to different irrationalities based on ground truth reward functions, run reward inference, and measure the performance against the ground truth, i.e. the accuracy of a Bayesian posterior on the reward function given the (simulated) expert's inputs. There are many possible irrationalities that people exhibit , far more than what we could study in one paper. They come with varying degrees of mathematical formalization and replication across human studies. 
To provide good coverage of this space, we start from the Bellman update, and systematically manipulate its terms and operators to produce a variety of different irrationalities that deviate from the optimal MDP policy in complementary ways. For instance, operating on the discount factor can model more myopic behavior, while operating on the transition function can model optimism or the illusion of control. Figure 1 summarizes our approach, which we detail below. Figure 1: We modify the components of the Bellman update to cover different types of irrationalities: changing the max into a softmax to capture noise, changing the transition function to capture optimism/pessimism or the illusion of control, changing the reward values to capture the nonlinear perception of gains and losses (prospect theory), changing the average reward over time into a maximum (extremal), and changing the discounting to capture more myopic decision-making. The rational expert does value iteration using the Bellman update from figure 1. Our models change this update to produce different types of non-rational behavior. Boltzmann-rationality modifies the maximum over actions max a with a Boltzmann operator with parameter β: ) This models that people will not be perfect, but rather noisily pick actions in a way that is related to the Qvalue of those actions. The constant β is called the rationality constant, because as β → ∞, the human choices approach perfect rationality (optimality), whereas β = 0 produces uniformly random choices. This is the standard assumption for reward inference that does not assume perfect rationality, because it easily transforms the rationality assumption into a probability distribution over actions, enabling learners to make sense of imperfect demonstrations that otherwise do not match up with any reward parameters. Our next set of irrationalities manipulate the transition function away from reality. Illusion of Control. Humans often overestimate their ability to control random events. To model this, we consider experts that use the Bellman update: where T n (s |s, a) ∝ (T (s |s, a)) n. As n → ∞, the demonstrator acts as if it exists in a deterministic environment. As n → 0, the expert acts as if it had an equal chance of transitioning to every possible successor state. At n = 1, the expert is the rational expert. Optimism/Pessimism. Humans tend to systematically overestimate their chance experiencing of positive over negative events. We model this using experts that modify the probability they get outcomes based on the value of those outcomes: where T 1/τ (s |s, a) ∝ T (s |s, a)e (r(s,a,s)+γVi(s))/τ. 1/τ controls how pessimistic or optimistic the expert is. As 1/τ → +∞, the expert becomes increasingly certain that good transitions will happen. As 1/τ → −∞, the expert becomes increasingly certain that bad transitions will happen. As 1/τ → 0, the expert approaches the rational expert. Next, we consider experts that use the modified Bellman update: where f: R → R is some scalar function. This is equivalent to solving the MDP with reward f • r. This allows us to model human behavior such as loss aversion and scope insensitivity. inspires us to consider a particular family of reward transforms: c controls how loss averse the expert is. As c → ∞, the expert primarily focuses on avoiding negative rewards. As c → 0, the expert focuses on maximizing positive rewards and 2.2.5 MODIFYING THE SUM BETWEEN REWARD AND FUTURE VALUE: EXTREMAL Extremal. 
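All of the manipulations above act on the same value-iteration loop. The sketch below illustrates two of them: the Boltzmann operator in place of the max (written here in one common softmax-weighted-average form) and the optimism/pessimism re-weighting of transitions by e^{(r + γV)/τ}. Array names and shapes are illustrative.

```python
import numpy as np

def boltzmann(q, beta):
    # Softmax-weighted average over actions: one common form of the Boltzmann
    # operator; recovers the max as beta -> infinity and the unweighted mean as beta -> 0.
    w = np.exp(beta * (q - q.max()))
    return float((w * q).sum() / w.sum())

def biased_value_iteration(T, R, gamma, beta=None, inv_tau=0.0, sweeps=200):
    """T and R are (S, A, S) arrays of transition probabilities and rewards r(s, a, s')."""
    S, A, _ = T.shape
    V = np.zeros(S)
    for _ in range(sweeps):
        Q = np.zeros((S, A))
        for s in range(S):
            for a in range(A):
                target = R[s, a] + gamma * V                     # r(s,a,s') + gamma V(s')
                p = T[s, a]
                if inv_tau != 0.0:                               # optimism (>0) / pessimism (<0)
                    p = p * np.exp(inv_tau * (target - target.max()))
                    p = p / p.sum()
                Q[s, a] = p @ target
        V = np.array([boltzmann(Q[s], beta) if beta is not None else Q[s].max()
                      for s in range(S)])
    return V, Q
```

The remaining biases fit the same loop: the prospect-theory expert transforms the reward, the extremal expert replaces the sum of reward and future value with a maximum, and the myopic and hyperbolic experts alter the discounting or the number of update sweeps.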
Humans seem to exhibit duration neglect, sometimes only caring about the maximum intensity of an experiennce . We model this using experts that use the Bellman step: These experts maximize the expected maximum reward along a trajectory, instead of the expected sum of rewards. As α → 1, the expert maximizes the expected maximum reward they achieve along their full trajectory. As α → 0, the expert becomes greedy, and only cares about the reward they achieve in the next timestep. Myopic Discount. In practice, humans are often myopic, only considering immediate rewards. One way to model this is to decrease gamma in the Bellman update. At γ = 1, this is the rational expert. As γ → 0, the expert becomes greedy and only acts to maximize immediate reward. Myopic VI. As another way to model human myopia, we consider a expert that performs only h steps of Bellman updates. That is, this expert cares equally about rewards for horizon h, and discount to 0 reward after that. As h → ∞, this expert becomes rational. If h = 1, this expert only cares about the immediate reward. Hyperbolic Discounting. Human also exhibit hyperbolic discounting, with a high discount rate for the immediate future and a low discount rate for the far future. formulate this as the following Bellman update: k modulates how much the expert prefers rewards now versus the future. As k → 0, this expert becomes the rational expert. 3.1 EXPERIMENTAL DESIGN Simulation Environment. To reduce possible confounding from our choice of environment, we used a small 5x5 gridworld where the irrationalities nonetheless cause experts to exhibit different behavior. Our gridworld consists of three types of cells: ice, holes, and rewards. The expert can start in any ice cell. At each ice cell, the expert can move in one of the four cardinal directions. With Figure 2: The log loss (lower = better) of the posterior as a function of the parameter we vary for each irrationality type. These six irrationalities all have parameter settings that outperform rational experts. For the models that interpolate to rational expert, we denote the value that is closest to rational using a dashed vertical line. probability 0.8, they will go in that direction. With probability 0.2, they will instead go in one of the two adjacent directions. Holes and rewards are terminal states, and return the expert back to their start state. They receive a penalty of −10 for falling into a hole and θ i ∈ for entering into the ith reward cell. Dependent Measures. To separate the inference difficulty caused by suboptimal inference from the difficulty caused by expert irrationality, we perform the exact Bayesian update on the trajectory θ , which gives us the posterior on θ given ξ: We use two metrics to measure the difficulty of inference The first is the expected log loss of this posterior, or negative log-likelihood: A low log loss implies that we are assigning a high likelihood to the true θ. As we are performing exact Bayesian inference with the true model P (ξ|θ) and prior P (θ), the log loss is equal to the entropy of the posterior H(θ|ξ). The second metric is the L 2 -distance between the mean posterior θ and the actual theta: The closer the inferred posterior mean of θ is to the actual value θ *, the lower the loss. For each irrationality type, we calculate the performance of reward inference on trajectories of a fixed length T, with respect to the two metrics above. To sample a trajectory of length T from a expert, we fix θ * and start state s. 
Then, we perform the expert's (possibly modified) Bellman updates until convergence to recover the policy π θ *. Finally, we generate rollouts starting from state s until T state, action pairs have been sampled from π θ *. Figure 3: A best case analysis for each irrationality type: the log loss/L 2 distance from mean (lower=better) for experts, as a function of the length of trajectory observed. Each irrationality uses the parameter value that is most informative. As discussed in section 3.2, different irrationality types have different slopes and converge to different values. In addition, the best performing irrationality type according to log loss is not the best performing type according to L 2 loss. Impact of Each Irrationality. We found that of the 8 irrationalities we studied, 6 had parameter settings that lead to lower log loss than the rational expert. We report how the parameter influences the log loss for each of these experts in figure 2. 1 For T = 30, Optimism with 1/τ = 3.16 performed the best, followed by Boltzmann with β = 100 and Hyperbolic with k = 0.1. Both forms of Myopia also outperformed the rational expert, with best performance occurring at γ = 0.9 and h = 5. Finally, the Extremal expert also slightly outperformed the rational expert, with best performance at α = 0.9. Notably, in every case, neither the most irrational expert nor the perfectly rational expert was the most informative. Impact of Data for Different Irrationalities. Next, we investigate how the quality of inference varies as we increase the length of the observed trajectory T. We report our for the best performing parameter for each irrationality type in figure 3. Interestingly, while both metrics decrease monotonically regardless of irrationality type, the rate at which they decrease differs by the irrationality type, and the best performing irrationality type according to log loss (Optimism) is not the best performing type according to L 2 distance (Boltzmann). What is behind these differences? To explain these , we use the notion of mutual information I(X; Y) between two variables, defined as: The mutual information measures how much our uncertainty about X decreases by observing Y. For reward inference, the term we care about is the mutual information between the expert's trajectory and the reward parameters The mutual information I(θ; ξ) is equal to a constant minus the posterior log loss under the true model. A expert with mutual information will cause the learner to have a lower posterior log loss., 4): when the reward is sufficiently large, the expert becomes convinced that no action it takes will lead to the reward, leading it to perform random actions. Figure 5: (a) Boltzmann-rationality produces different policies for θ * = vs. θ * =: when ||θ|| is larger, the policy becomes closer to that of the rational expert. (b) A Myopic expert produces different policies for θ * = vs. θ * =: while the rational expert always detours around the hole and attempts to reach the larger reward, myopia causes the myopic expert to go for the smaller source of reward when it is non-zero. By the information processing inequality, we have the bound I(θ; ξ) ≤ I(θ; π). To have higher mutual information, different θs should be mapped to different policies πs. Indeed, we found that the experts that were able to outperform the rational expert were able to disambiguate between θs that the rational expert could not. 
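A minimal sketch of the exact Bayesian update and the two metrics, assuming the θ grid has been enumerated and each expert model exposes its action probabilities π_θ(a|s); the helper names are illustrative.

```python
import numpy as np

def exact_posterior(trajectory, policies, prior):
    """trajectory: list of (s, a) pairs; policies[k][s, a] = P(a | s) under theta_k."""
    log_post = np.log(prior)
    for s, a in trajectory:
        log_post = log_post + np.array([np.log(pi[s, a] + 1e-12) for pi in policies])
    log_post -= log_post.max()                    # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

def inference_metrics(post, thetas, k_star):
    """Log loss of the true reward index k_star, and L2 error of the posterior mean."""
    thetas = np.asarray(thetas)
    log_loss = -np.log(post[k_star] + 1e-12)
    l2 = np.linalg.norm(post @ thetas - thetas[k_star])
    return log_loss, l2
```

Because the posterior is exact, any difference in these metrics between expert types can only come from how distinctly the different θ values are mapped to behavior.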
To visualize this, we show examples of how the policy of several irrational experts differ when the rational expert's policies are identical in figures 4 and 5. We plot the correlation between I(θ; ξ) and I(θ; π) in figure 6. Experts that have more informative policies tend to have more informative trajectories, but the correlation is not perfect. Notably, the Optimism expert has the most informative trajectories of length 30, but has less informative policies than the Boltzmann expert. In the limit of infinite data from every state, we would have I(θ; ξ) → I(θ; π). However, as each trajectory begins from the same start state, and not every state is reachable with every policy, the bound is not achievable in general, even if we observe an arbitrarily large number of trajectories. This highlights the need for off-policy data in reward inference tasks. We show that, contrary to what we might expect, suboptimal experts can actually help an agent learn the reward function. Optimism bias, myopia (via heavier discounting or hyperbolic discounting), Figure 6: The informativeness of policies correlates with the informativeness of trajectories of length 30, as discussed in section 3.2 and noise via Boltzmann rationality were the most informative irrationalities in our environments, far surpassing the performance of the rational expert for their ideal settings. Our contribution overall was to identify a systematic set of irrationalities by looking at deviations in the terms of the Bellman update, and show that being irrational is not automatically harmful to inference by quantifying and comparing the inference performance for these different types. Estimating expert irrationality. One major limitation of our work is that our findings hold for when the learner knows the type and parameter value of the irrationality. In practice, reward inference will require solving the difficult task of estimating the irrationality type and degree . We still need to quantify to what extent these still hold given uncertainty about the irrationality model. It does, however, seem crucial to reward inference that learners do reason explicitly about irrationality -not only is the learner unable to take advantage of the irrationality to make better inference if it does not model it, but actually reward inference in general suffers tremendously if the learner assumes the wrong type. In figure 10 in the Appendix, we compare inference with the true model vs. with assuming a Boltzmann model as default. The are quite striking: not knowing the irrationality harms inference tremendously. Whether irrationalities help, this means that it is really important to model them. Generalization to other environments. A second limitation of our work is that we only tested these models in a limited range of environments. Further work is needed to test generalization of our findings across different MDPs of interest. Our analysis of mutual information lends credence to the Boltzmann rationality generalizing well: these policies are much more varied with the reward parameters. In contrast, how useful the optimism bias is depends on the task: if we know about what to avoid already, as was the case for our learner, the bias is useful; if, on the other hand, we would know the goal but do not know what to avoid, the bias can hinder inference. 
Overall, this paper merely points out that there is a lot of richness to the ways in which these biases affect inference, and provides a quantitative comparison for a starting domain -much more is needed to gain a deeper understanding of this phenomenon. Applications to real humans. A third limitation is that we do not know where real humans lie. Do they have the helpful irrationality types? Do they fall in the range of parameters for these types that help inference? And what happens when types combine? While these questions are daunting, there is also a hidden opportunity here: what if we could influence humans to exhibit helpful types of irrationality? It might be much easier for them, for instance, to act myopically than to act rationally. In the end, reward inference is the confluence of two factors: how well the robot learns, and how well the teacher teaches. Our point out that it might be easier than previously thought to be a good teacher -even easier than being a rational expert., 1.78, 3.16, 5.62, 10, 17.8, 31.6, 56.2, 100, 178, 316, 562, 1000,1780, 3160, 5620, To enable exact inference, we discretized θ, using 5 evenly spaced points for each θ i. Our specific grid is included in figures 4 and 5 As there are two reward cells, this gives us 25 possible distinct reward parameters. We assumed a uniform prior on the reward parameter. We list the parameter values we search over for each policy in table 1. Except for myopic γ and myopic h, we use γ = 0.99. For myopic h, we use γ = 1. From each start state, we sample 10 trajectories of each length for each reward parameter, policy combination. We include the plots for the log loss of trajectories from the Prospect Theory and Illusion of Control experts in 7 In addition, we include the plots for the L 2 loss for all 8 irrationalities in figures 8 and figure 9. Given that several types of irrationality can help inference when the form of irrationality is known, a natural question to ask is how important is it to known the irrationality exactly. To investigate this, we plot the log loss of the posterior of a learner who falsely assumes that the expert is Boltzmann- Figure 8: The L 2 distance (lower = better) of posterior mean of θ to the true θ *,s as a function of the parameter we vary for each irrationality type. These six irrationalities all have parameter settings that outperform rational experts. For the models that interpolate to rational expert, we denote the value that is closest to rational using a dashed vertical line. A comparison of reward inference using a correct model of the irrationality type, versus always using a Boltzman model. (Lower log loss = better.) The inference impairment from using the misspecified irrationality model (Boltzmann) greatly outweighs the variation in inference performance caused by the various irrationality types themselves. Hence, compared to using a misspecified model of irrationality, expert irrationality is not in itself a major impairment to reward inference, and sometimes expert irrationality can even helps when a model of the irrationality is known. rational with β = 100. Where applicable, the log loss is averaged over possible hyperparameter settings for the expert. We report the in figure 10. The log loss of the posterior if we wrongly imagine the expert is Boltzmann-rational far outweighs differences between particular irrationality types. Fundamentally, misspecification is bad for inference because different experts might exhibit the same action only under different reward parameters. 
For example, consider the case where the actual expert is myopic, with small n. Then the myopic agent might go toward a closer reward even if it is much smaller, as shown in figure 11. This would cause the learner to falsely infer that the closer reward is quite large, leading to a posterior with extremely high log loss when the reward is actually smaller. Figure 11: An example of why assuming Boltzmann is bad for a myopic agent - the Boltzmann-rational agent would take this trajectory only if the reward at the bottom was not much less than the reward at the top. The myopic agent with n ≤ 4, however, only "sees" the reward at the bottom. Consequently, inferring the preferences of the myopic agent as if it were Boltzmann leads to poor performance in this case.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BJlo91BYPr
We find that irrationality from an expert demonstrator can help a learner infer their preferences.
Natural Language Processing models lack a unified approach to robustness testing. In this paper we introduce WildNLP - a framework for testing model stability in a natural setting where text corruptions such as keyboard errors or misspelling occur. We compare robustness of models from 4 popular NLP tasks: Q&A, NLI, NER and Sentiment Analysis by testing their performance on aspects introduced in the framework. In particular, we focus on a comparison between recent state-of-the-art text representations and non-contextualized word embeddings. In order to improve robustness, we perform adversarial training on selected aspects and check its transferability to the improvement of models with various corruption types. We find that the high performance of models does not ensure sufficient robustness, although modern embedding techniques help to improve it. We release corrupted datasets and code for WildNLP framework for the community. Adversarial examples have been shown to severely degrade performance of deep learning models BID10 BID14. Natural Language Processing systems are no different in this respect. Multiple areas of NLP, such as machine translation BID1, question answering BID12, or text classification have been studied to assess the impact of adversaries generated with various methods. However, these works tend to focus on one area only, often with attacks designed just for the selected problem. It makes comparisons between models, datasets, and NLP areas impossible. In particular, the robustness of modern word embedding systems - such as ELMo BID17, Flair BID0 and language model based BERT BID5 - remains unstudied. In this article, we evaluate the behavior of natural language models in the wild. We propose WildNLP - a systematic and comprehensive robustness testing framework which can be used for any NLP model. Instead of focusing on elaborate attacks, which are unlikely to originate by accident, we measure the quality of models in a natural setting, where input data is poisoned with errors involuntarily generated by actual users. We put these notions into a set of tests called aspects. Moreover, we introduce the concept of corruption severity and prove that it is critical to model improvement via adversarial training. The framework is aimed at any NLP problem irrespective of its form of input and output. In summary, our contributions are the following: 1. We offer a systematic framework for testing corruption robustness - the WildNLP. In total, we introduce 11 aspects of robustness testing, with multiple severity levels. We release the code and a collection of popular datasets that are corrupted with WildNLP for the community 1. The framework is easy to extend. New aspects can be defined by the community. 2. We test corruption robustness of a number of NLP tasks: question answering (Q&A), natural language inference (NLI), named entity recognition (NER), and sentiment analysis (SA). We verify stability of models trained on contextualized embeddings like ELMo and Flair in contrast to non-contextualized FastText BID2 and GloVe BID16. We also analyze BERT in the task of Q&A. We find that new forms of text representation, despite greater contextual awareness, do not offer a sufficient increase in robustness. 3. We find that model training on one aspect does improve performance on another aspect, contrary to previous studies BID1. For this to be true, two corruption types must be similar to some extent. In section 2 we present related literature in the domain of NLP robustness.
In section 3 we present WildNLP framework, describing in detail each introduced aspect. In section 4 we compare robustness of NER, Q&A, NLI and Sentiment Analysis. In section 5 we perform adversarial training on Qwerty aspect with different severities and test these models on other aspects. We conclude in section 6. The problem of natural noise in textual data has been studied by BID1, however exclusively in the context of character-based machine translation models. They find that errors such as typos and misspelling cause significant drops in BLEU scores. Other recent approaches to generating textual adversaries include the work of, who exploit important word manipulations for text classification models from 2014 and 2015. BID7 identify important words and apply 4 kinds of character perturbations: swap, substitution, deletion and insertion. They test on vanilla LSTM and CNN model, applying them to 8 datasets. Among others, they aim for the character swaps to map a word vector to an'unknown' vector in traditional word embeddings. BID19 create rules of substitutions between texts which produce correct and semantically identical samples in Q&A domain. BID9 design adversaries for NLI systems, swapping words which share a relation such as antonymy or co-hyponymy. 3. Targeted robustness. These are the attacks designed for a specific problem and/or dataset, or demanding access to model internals. An example is the whole class of white box attacks BID6 as well as highly specialized attacks BID12. The WildNLP aspects define classes of common disturbances found in natural text. These corruptions are produced naturally due to haste, lacking space, individual writing habits or imperfect command of English. Articles. Randomly removes or swaps articles into wrong ones. Swap. Randomly shuffles two characters within a word. Qwerty. Simulates errors made while writing on a QWERTY-type keyboard. Characters are swapped for their neighbors on the keyboard. Remove char. Randomly removes characters from words. Remove space Removes a space from text, merging two words. Misspelling. Misspells words appearing in the Wikipedia list of commonly misspelled English words 2.Digits2words. Rewrites digit numbers into words. Homophones. Changes words into their homophones from the Wikipedia list of common misspellings/homophones 3. The list contains around 500 pairs or triples of homophonic words. Negatives. This aspect reflects attempts made by some Internet users to mask profanity Warsaw was believed to be one of the most beautiful cities in the world. Warsaw was believed to be one of a most beautiful cities in world. Warsaw aws believed to be one fo teh most beautiful cities in the world. Wadsaw was bdlieved to be one of the most beautiful citiee in the world.. Warsaw was believed to be one o th most eautiful cities in the world. Warsaw was believed tobe one of the most beautiful cities in the world. You cannot accidentally commit vandalism. Vandalism used to be a rare occurrence. You can not accidentaly commit vandalism. Vandalism used to be a rare occurrance. Bus Stops for Route 6, 6.1 Digits2words Bus Stops for Route six, six point one OriginalChoosing between affect and effect can be scary. Choosing between effect and effect can bee scary. Original Laughably foolish or false: an absurd explanation. Negatives Laughab*y fo*lish or fal*e: an a*surd explanation. OriginalSometimes it is good to be first, and sometimes it is good to be last. Sometimes it is go*d to be first, and sometimes it is goo* to be last. 
Sometimes, it is good to be first and sometimes, it, is good to be last.or hate speech in online forums to evade moderation. We perform masking of negative words from Opinion Lexicon 4. The lexicon contains a list of English positive and negative opinion words or sentiment words, in total around 6800 words. Positives. Masks positive words from Opinion Lexicon, similarly as in the case of Negatives (described above).Marks. Randomly removes and insert punctuation marks. Marks are inserted between last letter of a word and space. The severity of perturbations can be varied. In the case of Swap, Qwerty and Remove char we control it by defining how many words will be affected. In the case of Article, it is defined by a probability of corruption of each article. We test corruption robustness on various NLP tasks and models. Each of the models is run on the specific dataset it has been trained on in the original setting. The datasets are preprocessed by the WildNLP framework to obtain corrupted data with multiple aspects. An important point in the experimental setting is the application of various word embeddings. We focus on testing the robustness of models trained with newly introduced context-aware embeddings: ELMo, Flair and language model based BERT. We compare their performance on corrupted data to older embedding systems -GloVe, FastText (within InferSent) and in the case of one of sentiment analysis models, even one-hot encoded words. We do so to verify the assumption that greater context awareness and lack of problems with out-of-vocabulary (OOV) words in ELMo, Flair and BERT would increase robustness of models. We use our framework on the selection of well known models that are widely used in NLP community. For training ELMo-based models we use open-source implementations available in AllenNLP BID8, for BERT we follow implementation of HuggingFace 5 and for the rest of the models we use original author research code. In particular, following models and datasets are used in experiments:• Q&A task Models. We test BiDAF and BERT trained on the SQuAD dataset BID18. We analyze two versions of BiDAFwith ELMo (BiDAF-E) and GloVe (BiDAF-G) embeddings. BiDAF uses character and word embeddings with a bidirectional attention flow to obtain a query-aware context representation. BiDAF is one of the state-of-theart models on the SQuAD leaderboard. On the other hand, BERT applies a bidirectional Transformer to language modeling task and is currently used with great success in various NLP tasks, achieving the new state-ofthe-art. We evaluate the models with the common performance scores in Q&A task, which are Exact Match (EM) and F1 score. Dataset. SQuAD dataset comprises around 100,000 question-answer pairs prepared by crowdworkers. The dataset is based on Wikipedia articles. TAB3 displays examples of the question-answer pairs.• Table 3 contains an example of the three possible entailment relations.• NER task Models. We use two sequence tagging models with ELMo implementation (CRF-E) and Flair BID0. Flair comprises new word embeddings an a BiLSTM-CRF sequence labeling system. It models words as sequences of characters, which allows to effectively eliminate the notion of separate tokens. Flair is currently the state-of-the-art model in NER task. Dataset. The CoNLL 2003 dataset is a standard training dataset used in NER sequence tagging. It is a collection of news articles from Reuters corpus annotated as Person, Organization, Location, Miscellaneous, or Other for non-named entities. 
Due to licensing agreement this is the only corrupted dataset that we cannot release.• SA task Models. We use the current state-of-the-art ULMFiT model BID11 that consists of language model pretrained on Wikipedia and fine-tuned on the specific text corpus that is used in classification task. In adversarial training scenario, we pretrain this language model on corrupted data. We TAB1 and TAB3. For Q&A models, EM measure is displayed.compare ULMFiT with CNN based classification model, which uses one-hot encoding of words. Dataset. We train and test described models on IMDB dataset that consists of 25000 positive and 25000 negative reviews of movies. Figure 1 (Q&A models) and Figure 2 (other models) present aggregate of testing on all models and all corruption aspects. Variability and scale of performance drops are depicted in FIG2. Tables with full can be found in Appendix A. Robustness measure. To comprehensively measure model robustness to corruptions, we calculate an overall mean of drops across all aspects (Av-Drop). We use this aggregated metric to compare robustness between models. Q&A. The robustness of Q&A models was the lowest of all tested tasks. The corruptions which proved most damaging to the performance and in to Av-Drop were the following: Swap 5 (32 -37 EM drop), Remove char 5 (29 -37 EM drop), Qwerty 5 (25 -30 EM drop).BERT and ELMo-based systems were found to mitigate performance loss to some degree compared to GloVe. However, their performance loss pattern across corruptions was similar to GloVe, and the difference of Av-Drop between BERT (most robust model) and BiDAF GloVe (least robust model) was 2.8 pp, despite huge performance differences reflected in F1 and EM.We observe that severity of aspects plays an important role in drop of performance metrics across all Q&A models. For aspects that corrupt individual words like Qwerty, Remove char or Swap, drop in performance of GloVe-based models is intuitive -we substitute words from out of vocabulary (OOV) with unknown token. However, in the case of ELMo and BERT the problem of OOV tokens is not that severe -they are character or subword-based. Still, we observe an average drop of F1 metric on these three aspects (severity 5) at the level of 23.04 (BiDAF-E) and 24.46 (BERT) in comparison to drop of BiDAF-G at 32.9. Lower severities of word corruptions induce much lower drops -in case of severity 1 it is still a noticeable difference of 4.48 (BiDAF-E), 3.44 (BERT) and 5.63 (BiDAF-G).WildNLP also tests on aspects that do not alter words but sentences. As previously, we state that context-aware models should be indifferent to such changes as they do not alter sentence meaning. However, we observe that aspects such as Remove space and Marks decrease F1 values among all Q&A even by 8.89 in case of Remove space tested with BiDAF-E, whereas BERT proves to be more robust to this sentencelevel corruption with drop of F1 at 2.47.NLI. Natural Language Inference task tested by WildNLP framework is more robust when trained with decomposable attention model with ELMo embeddings (Dec-E) rather than simple MLP classifier that uses sentence embeddings created by InferSent method (InferSent). The Av-Drop for Dec-E is half the value of Av-Drop for InferSent, being at the level of 4.19. On all sets of aspects, Dec-E model has lower drops of performance metric. However, it still has relatively high drops when it comes to word corruption aspects like Qwerty, Remove char or Swap, with average drop of 10.92 at severity 5 and 2.09 at severity 1. 
InferSent performs worse by around 3 pp (5.56 and 12.82 respectively).However, when we consider sentence level aspects like adding extra commas to the sentence, Dec-E model is very robust, having only 0.85 of drop in accuracy on highest possible severity. NER. Both NER models seems to be robust, having the Av-Drop measure at the level of 2.37 (CRF-E) and 2.14 (Flair). However, in the case of state-of-the-art NER models, differences in performance are so small, that such relatively small values of Av-Drop can be seen as high. As we processed only non-NE words with WildNLP framework, we assume that model predictions of named entities are dependent on surrounding context words. SA. ULMFiT model was found to be slightly less robust than CNN using one-hot encodings (2.36 vs 2.28 of Av-Drop). Drop in performance of the CNN model was mainly caused by Positives and Negatives corruptions (7.22 and 9.7 Av-Drop) which can be observed as the two outliers in FIG2. Presumably this behavior is caused by the model's focus on detecting sentiment-carrying words, which were on average rarely affected by other corruptions. On the other hand, ULMFiT was less affected by Positives and Negatives corruptions (3.6 and 4.2 AvDrop) probably because of its reliance on context and more subtle expressions of sentiment. In spite of the fact that the CNN model suffered from out-of-vocabulary words problem (corrupted words were simply unrecognized) while ULMFiT did not, the CNN proved more robust to most deformations in WildNLP framework. We use adversarial training to research the potential of overcoming corruption errors. We validate two hypotheses:1. Adversarial training on data corrupted with aspects of greater severity helps to resolve problems with data corrupted with lesser severity. For example, training on Qwerty 5-corrupted data should increase performance of data corrupted with Qwerty 1 up to Qwerty 5 severities.2. Adversarial training on one corruption type should increase model robustness to other corruptions. However, BID1 suggest that this is not the case. They find that models trained on one type of noise do not perform well on others in character-based translation models. Based on this, we hope to prove that robustness can be improved between aspects which are related. Corruption severity. In agreement with our hypothesis we find that increased severity of corruption during training does increase performance on data corrupted with the same aspect type but lesser severity. TAB6 presents numeric scores for the training setting in Q&A BiDAF ELMo models, while FIG3 shows plots for multiple models. In all scenarios, we test on Qwerty 1 and Qwerty 5 corruptions. Interestingly, in the case of NER models, obtained on models trained on both corruption types are even better than for the original model (for Qwerty 5 model, this behavior is consistent across levels of severity of test data perturbations).Empirically, the severity of Qwerty perturbation (and others) does make the text unintelligible for humans at some point. For example, this boundary was found to be level 5 for Q&A questions. However, the Q&A BiDAF ELMo model trained on Qwerty 5 performs reasonably well even at severity level 8. This suggests that the model learned to decode this corruption even beyond human ability. Relation between corruption types. To verify relations between performance of models trained and tested on various corruption types, we test models trained on Qwerty corruption with severity 1 and 5. 
Qwerty exhibits similarities to the Swap and Remove char types, since all of them imply manipulations of word characters. We observe that BiDAF ELMo and NER ELMo models trained on Qwerty and tested on similar aspects perform better than the original models not trained in an adversarial setting. Results are depicted in FIG4. In this work, we have presented the WildNLP framework for corruption robustness testing. We have introduced 11 text corruption types (at various severity levels) which can occur naturally in a model deployment setting: misspellings, keyboard errors, attempts at masking emotional language, and others. We test on four NLP areas and 12 models in total, verifying corruption robustness of the state-of-the-art BERT system and new LM-based embeddings, ELMo and Flair, contrasted with GloVe and FastText. We find that the problem of lacking corruption robustness is not solved by these recent systems. However, we find that the issue can be partially alleviated by adversarial training, even across aspects. We believe that the problem of adversarial examples in NLP is still vague and hard to quantify. Without doubt, more work is needed to make models robust to natural noise, whether by robust word embeddings, model architectures, or better datasets.
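To make the aspect definitions from Section 3 concrete, the following is a minimal sketch of two word-level corruptions (Swap and Qwerty) with severity expressed as the fraction of affected words; the keyboard-neighbour map is abbreviated and the sampling details are assumptions rather than the released WildNLP implementation.

```python
import random

QWERTY_NEIGHBORS = {  # abbreviated neighbour map, for illustration only
    "a": "qwsz", "s": "awedxz", "d": "serfcx", "e": "wsdr", "i": "ujko", "o": "iklp",
}

def swap_chars(word):
    """Swap aspect: shuffle two adjacent characters inside the word."""
    if len(word) < 4:
        return word
    i = random.randrange(1, len(word) - 2)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def qwerty_typo(word):
    """Qwerty aspect: replace one character with a keyboard neighbour."""
    idxs = [i for i, ch in enumerate(word) if ch.lower() in QWERTY_NEIGHBORS]
    if not idxs:
        return word
    i = random.choice(idxs)
    return word[:i] + random.choice(QWERTY_NEIGHBORS[word[i].lower()]) + word[i + 1:]

def corrupt(text, aspect, severity=0.1):
    """Apply `aspect` to roughly a `severity` fraction of the words in `text`."""
    words = text.split()
    k = max(1, int(severity * len(words)))
    for i in random.sample(range(len(words)), k):
        words[i] = aspect(words[i])
    return " ".join(words)

# corrupt("Warsaw was believed to be one of the most beautiful cities", qwerty_typo, 0.2)
```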
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SkxgBPr3iN
We compare robustness of models from 4 popular NLP tasks: Q&A, NLI, NER and Sentiment Analysis by testing their performance on perturbed inputs.
Training generative models like Generative Adversarial Network (GAN) is challenging for noisy data. A novel curriculum learning algorithm pertaining to clustering is proposed to address this issue in this paper. The curriculum construction is based on the centrality of underlying clusters in data points. The data points of high centrality takes priority of being fed into generative models during training. To make our algorithm scalable to large-scale data, the active set is devised, in the sense that every round of training proceeds only on an active subset containing a small fraction of already trained data and the incremental data of lower centrality. Moreover, the geometric analysis is presented to interpret the necessity of cluster curriculum for generative models. The experiments on cat and human-face data validate that our algorithm is able to learn the optimal generative models (e.g. ProGAN) with respect to specified quality metrics for noisy data. An interesting finding is that the optimal cluster curriculum is closely related to the critical point of the geometric percolation process formulated in the paper. Deep generative models have piqued researchers' interest in the past decade. The fruitful progress has been achieved on this topic, such as auto-encoder and variational auto-encoder (VAE) , generative adversarial network (GAN) (;, normalizing flow (; ;), and autoregressive models (van den b; a; . However, it is non-trivial to train a deep generative model that can converge to a proper minimum of the associated optimization. For example, GAN suffers non-stability, mode collapse, and generative distortion during training. Many insightful algorithms have been proposed to circumvent those issues, including feature engineering , various discrimination metrics , distinctive gradient penalties , spectral normalization to discriminator , and orthogonal regularization to generator . What is particularly of interest is that the breakthrough for GANs has been made with a simple technique of progressively growing neural networks of generators and discriminators from low-resolution images to high-resolution counterparts (a). This kind of progressive growing also helps push the state of the arts to a new level by enabling StyleGAN to produce photo-realistic and detail-sharp (b), shedding new light on wide applications of GANs in solving real problems. This idea of progressive learning is actually a general manner of cognition process , which has been formally named curriculum learning in machine learning . The central topic of this paper is to explore a new curriculum for training deep generative models. To facilitate robust training of deep generative models with noisy data, we propose curriculum learning with clustering. The key contributions are listed as follows: • We first summarize four representative curricula for generative models, i.e. architecture (generation capacity), semantics (data content), dimension (data space), and cluster (data structure). Among these curricula, cluster curriculum is newly proposed in this paper. • Cluster curriculum is to treat data according to the centrality of each data point, which is pictorially illustrated and explained in detail. To foster large-scale learning, we devise the active set algorithm that only needs an active data subset of small fixed size for training. • The geometric principle is formulated to analyze hardness of noisy data and advantage of cluster curriculum. 
The geometry pertains to counting a small sphere packed in an ellipsoid, on which is based the percolation theory we use. The research on curriculum learning is diverse. Our work focuses on curricula that are closely related to data attributes, beyond which is not the scope we concern in this paper. Curriculum learning has been a basic learning approach to promoting performance of algorithms in machine learning. We quote the original words from the seminal paper as its definition: Curriculum learning. " The basic idea is to start small, learn easier aspects of the task or easier sub-tasks, and then gradually increase the difficulty level" according to pre-defined or self-learned curricula. From cognitive perspective, curriculum learning is common for human and animal learning when they interact with environments , which is the reason why it is natural as a learning rule for machine intelligence. The learning process of cognitive development is gradual and progressive . In practice, the design of curricula is task-dependent and data-dependent. Here we summarize the representative curricula that are developed for generative models. Architecture curriculum. The deep neural architecture itself can be viewed as a curriculum from the viewpoint of learning concepts or disentangling representations . For example, the different layers decompose distinctive features of objects for recognition (; ;) and generation . Besides, Progressive growing of neural architectures is successfully exploited in GANs (a; ; ; b). Semantics curriculum. The most intuitive content for each datum is the semantic information that the datum conveys. The hardness of semantics determines the difficulty of learning knowledge from data. Therefore, the semantics can be a common curriculum. For instance, the environment for a game in deep reinforcement learning and the number sense of learning cognitive concepts with neural networks can be such curricula. Dimension curriculum. The high dimension usually poses the difficulty of machine learning due to the curse of dimensionality , in the sense that the amount of data points for learning grows exponentially with dimension of variables . Therefore, the algorithms are expected to be beneficial from growing dimensions. The effectiveness of dimension curriculum is evident from recent progress on deep generative models, such as ProGANs (a; b) by gradually enlarging image resolution and language generation from short sequences to long sequences of more complexity . For fitting distributions, dense data points are generally easier to handle than sparse data or outliers. To train generative models robustly, therefore, it is plausible to raise cluster curriculum, meaning that generative algorithms first learn from data points close to cluster centers and then with more data progressively approaching cluster boundaries. Thus the stream of feeding data points to models for curriculum learning is the process of clustering data points according to cluster centrality that will be explained in section 3.2. The toy example in Figure 1 illustrates how to form cluster curriculum. The importance of clusters for data points is actually obvious from geometric point of view. The data sparsity in high-dimensional spaces causes the difficulty of fitting the underlying distribution of n = 100 n = 200 n = 300 n = 400 Figure 1: Cluster Curriculum. From magenta color to black color, the centrality of data points reduces. The value n is the number of data points taken with centrality order. data points . 
So generative algorithms may be beneficial when proceeding from the local spaces where data points are relatively dense. Such data points form clusters that are generally informative subsets with respect to the entire dataset. In addition, clusters contain common regular patterns of data points, where generative models are easier to converge. What is most important is that noisy data points deteriorate performance of algorithms. For classification, the effectiveness of curriculum learning is theoretically proven to circumvent the negative influence of noisy data . We will analyze this aspect for generative models with geometric facts. With cluster curriculum, we are allowed to gradually learn generative models from dense clusters to cluster boundaries and finally to all data points. In this way, generative algorithms are capable of avoiding the direct harm of noise or outliers. To this end, we first need a measure called centrality that is the terminology in graph-based clustering. It quantifies the compactness of a cluster in data points or a community in complex networks . A large centrality implies that the associated data point is close to one of cluster centers. For easy reference, we provide the algorithm of the centrality we use in Appendix. For experiments in this paper, all the cluster curricula are constructed by the centrality of stationary probability distribution, i.e. the eigenvector corresponding to the largest eigenvalue of the transition probability matrix drawn from the data. To be specific, let c ∈ R m denote the centrality vector of m data points. Namely, the i-th entry c i of c is the centrality of data point x i. Sorting c in descending order and adjusting the order of original data points accordingly give data points arranged by cluster centrality. Let.., − → X l } signify the set of centrality-sorted data points, where − → X 0 is the base set that contains sufficient data to attain convergent generative models, and the rest of − → X is evenly divided into l subsets according to centrality order. In general, the number of data points in − → X 0 is much less than m and determined according to X. Such division of − → X 0 serves to efficiency of training, because we do not need to train models from a very small dataset. The cluster curriculum learning is carried out by incrementally feeding subsets in − → X into generative algorithms. In other words, algorithms are successively trained on, meaning that the curriculum for each round of training is accumulated with − → X i. In order to determine the optimal curriculum we need the aid of quality metric of generative models, such as Fréchet inception distance (FID) or sliced Wasserstein distance (SWD) . For generative models trained with each curriculum, we calculate the associated score s i via the specified quality metric. The optimal curriculum for effective training can be identified by the minimal value for all s i, where i = 1,..., l + 1. The interesting phenomenon of this score curve will be illustrated in the experiment. The minimum of score s is apparently metric-dependent. One can refer to for the review of evaluation metrics. In practice, we can opt one of reliable metrics to use or multiple metrics for decision-making of the optimal model. There are two ways of using the incremental subset − → X i during training. One is that the parameters of models are re-randomized when the new data are used, the procedure of which is given in Algorithm 1 in Appendix. 
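As a concrete illustration of the construction just described, the following sketch orders samples by centrality, carves out the base set and the l increments, trains on the accumulated curriculum, and keeps the curriculum with the minimal quality score. This is a minimal sketch, not the authors' implementation: the helper `train_and_score` is a placeholder for one round of generative-model training plus FID/SWD evaluation (e.g. with ProGAN), and either the re-randomized training above or the fine-tuning variant described next can be plugged into it.

```python
import numpy as np

def build_curriculum(data, centrality, base_size, num_increments):
    """Sort samples by descending centrality and split into X_0, X_1, ..., X_l."""
    order = np.argsort(-centrality)              # high centrality first
    sorted_data = data[order]
    base = sorted_data[:base_size]               # X_0: enough data to reach convergence
    rest = np.array_split(sorted_data[base_size:], num_increments)
    return [base] + list(rest)

def run_cluster_curriculum(data, centrality, base_size, l, train_and_score):
    """Incrementally feed X_0 ∪ ... ∪ X_i and keep the curriculum with minimal score."""
    subsets = build_curriculum(data, centrality, base_size, l)
    scores, current = [], subsets[0]
    for i in range(1, len(subsets) + 1):
        scores.append(train_and_score(current))  # e.g. train ProGAN, return FID
        if i < len(subsets):
            current = np.concatenate([current, subsets[i]])
    best = int(np.argmin(scores))                # index of the optimal curriculum
    return best, scores
```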
The other is that the parameters are fine-tuned based on pre-training of the previous model, which will be presented with a fast learning algorithm in the following section. To obtain the precise minimum of s, the cardinality of − → X i needs to be set much smaller than m, meaning that l will be large even for a dataset of moderate scale. The training of many loops will be time-consuming. Here we propose the active set to address the issue, in the sense that for each loop of cluster curriculum, the generative models are always trained with a subset of a small fixed size instead of − → X 0 ← − → X 0 ∪ − → X i whose size becomes incrementally large. (a) Figure 2: Schematic illustration of active set for cluster curriculum. The cardinality |A| of the active set A is 200. When − → X 2 is taken for training, we need to randomly sample another 100 (i.e. − → X 2|) data points from the history data Then the complete active set is composed by We can see that data points in become less dense after sampling. To form the active set A, the subset − → A 0 of data points are randomly sampled from − → X 0 to combine with − → X i for the next loop, where For easy understanding, we illustrate the active set with toy example in Figure 2. In this scenario, progressive pre-training must be applied, meaning that the update of model parameters for the current training is based on parameters of previous loop. The procedure of cluster curriculum with active set is detailed in Algorithm 2 in Appendix. The active set allows us to train generative models with a small dataset that is actively adapted, thereby significantly reducing the training time for large-scale data. Cluster curriculum bears the interesting relation to high-dimensional geometry, which can provide geometric understanding of our algorithm. Without loss of generality, we work on a cluster obeying the normal distribution. The characteristic of the cluster can be extended into other clusters of the same distribution. For easy analysis, let us begin with a toy example. As Figure 3(a) shows, the confidence ellipse E 2 fitted from the subset of centrality-ranked data points is nearly conformal to E 1 of all data points, which allows us to put the relation of these two ellipses by virtue of the confidence-level equation. Let N (0, Σ) signify the center and covariance matrix of the cluster C of interest, where C = {x i |x i ∈ R d, i = 1, . . ., n}. To make it formal, we can write the equation by The annulus formed by of removing the inner ellipse from the outer one. where χ To analyze the hardness of training generative models, a fundamental aspect is to examine the number n(E) of given data points falling in a geometric entity E 1 and the number N (E) of lattice points in it. The less n(E) is compared to N (E), the harder the problem will be. However, the enumeration of lattice points is computationally prohibitive for high dimensions. Inspired by the information theory of encoding data of normal distributions , we count the number of small spheres S ε of radius ε packed in the ellipsoid E instead. Thus we can use this number to replace the role of N (E) as long as the radius of the sphere S ε is set properly. With a little abuse of normal d=2 d=20 d=50 d=100 d=500 d=1000 lattice d=2 d=20 d=50 d=100 d=500 d=1000 Figure 4: Comparison between the number n(A) of data points sampled from the isotropic normal distributions and N (A) of spheres (lattice) packed in the annulus A with respect to the Chi quantile χ α2. d is the dimension of data points. 
For each dimension, we sample 70,000 data points from N (0, I). The scales of y-axis and x-axis are normalized by 10,000 and χ α1, respectively. notation, we still use N (E) to denote the packing number in the following context. Theorem 1 gives the exact form of N (E). Theorem 1. For a set C = {x i |x i ∈ R d} of n data points drawn from normal distribution N (0, Σ), the ellipsoid E α of confidence 1 − α is defined as x Σ −1 x ≤ χ 2 α, where Σ has no zero eigenvalues and α ∈. Let N (E α) be the number of spheres of radius ε packed in the ellipsoid E α. Then we can establish We can see that N (E α) admits a tidy form with Mahalanobis distance χ α, dimension d, and sphere radius ε as variables. The proof is provided in Appendix. The geometric region of interest for cluster curriculum is the annulus A formed by removing the ellipsoid 2 E α2 from the ellipsoid E α1, as Figure 3 (b) displays. We investigate the varying law between n(A) and N (A) in the annulus A when the inner ellipse E α2 grows with cluster curriculum. For this purpose, we need the following two corollaries that immediately follows from Theorem 1. Corollary 1. Let N (A) be the number of spheres of radius ε packed in the annulus A that is formed by removing the ellipsoid E α1 from the ellipsoid E α1, where α 1 ≤ α 2. Then we have It is obvious that N (A) goes infinite when d → ∞ under the conditions that χ α1 > χ α2 and ε is bounded. Besides, when E α2 (cluster) grows, N (A) reduces with exponent d if E α1 is fixed. In light of Corollary 1, we can now demonstrate the functional law between n(A) and N (A). First, we determine χ α1 as follows which means that E α1 is the ellipsoid of minimal Mahalanobis distance to the center that contains all the data points in the cluster. In addition, we need to estimate a suitable sphere radius ε, such that n(E α1) and N (E α1) have comparable scales in order to make n(A) and N (A) comparable in scale. To achieve this, we define an oracle ellipse E where n(E) = N (E). For simplicity, we let E α1 be the oracle ellipse. Thus we can determine ε with Corollary 3. Corollary 3. If we let E α1 be the oracle ellipse such that n(E α1) = N (E α1), then the free parameter ε can be computed with ε = χ α1 det(Σ)/n(E α1) To make the demonstration amenable to handle, data points we use for simulation are assumed to obey the isotropic normal distribution, meaning that data points are generated with nearly equal Figure 5: Examples of LSUN cat dataset and CelebA face dataset. The samples in the first row are of high centrality and the samples of low centrality in the second row are noisy data or outliers that we call in the context. variance along each dimension. Figure 4 shows that n(A) gradually exhibits the critical phenomena of percolation processes 3 when the dimension d goes large, implying that the data points in the annulus A are significantly reduced when E α2 grows a little bigger near the critical point. In contrast, the number N (A) of lattice points is still large and varies negligibly until E α2 approaches the boundary. This discrepancy indicates clearly that fitting data points in the annulus is pretty hard and guaranteeing the precision is nearly impossible when crossing the critical point of n(A) even for a moderate dimension (e.g. d = 500). Therefore, the plausibility of cluster curriculum can be drawn naturally from this geometric fact. The generative model that we use for experiments are Progressive growing of GAN (ProGAN) (a). 
This algorithm is chosen because ProGAN is the state-of-the-arts algorithm of GANs with official open sources available. According to convention, we opt the Fréchet inception distance (FID) for ProGAN as the quality metric. We randomly sample the 200,000 cat images from the LSUN dataset . These cat images are captured in the wild. So their styles vary significantly. Figure 5 shows the cat examples of high and low centralities. We can see that the noisy cat images differ much from the clean ones. There actually contain the images of very few informative cat features, which are the outliers we refer to. The curriculum parameters are set as | − → X 0 | = 20, 000 and | − → X i | = 10, 000, which means that the algorithms are trained with 20,000 images first and after the initial training, another 10,000 images according to centrality order are merged into the current training data for further re-training. For active set, its size is fixed to be 30, 000. The CelebA dataset is a large-scale face attribute dataset . We use the cropped and well-aligned faces with a bit of image s preserved for generation task. For clustercurriculum learning, we randomly sample 70,000 faces as the training set. The face examples of different centralities are shown in Figure 5. The curriculum parameters are set as | − → X 0 | = 10, 000 and | − → X i | = 5, 000. We bypass the experiment of the active set on faces because it is used for the large-scale data. Each image in two databases is resized to be 64 × 64. To form cluster curricula, we exploit ResNet34 pre-trained on ImageNet to extract 512-dimensional features for each face and cat images. The directed graphs are built with these feature vectors. We determine the parameter σ of edge weights by enforcing the geometric mean of weights to be 0.8. The robustness of varying the value was validated in for clustering. The number of nearest neighbors is set to be K = 4 log m. The centrality is the stationary probability distribution. All codes are written with TensorFlow.: FID curves of cluster-curriculum learning for ProGAN on the cat dataset and CelebA face dataset. The centrality and the FID share the x-axis due to that they have the same order of data points. The same colors of the y-axis labels and the curves denote the figurative correspondence. The network parameters for "normal training" are randomly re-initialized for each re-training. The active set is based on progressive pre-training of the fixed small dataset. The scale of the x-axis is normalized by 10,000. From Figure 6a, we can see that the FID curves are all nearly V-shaped, indicating that the global minima exist amid the training process. This is the clear evidence that the noisy data and outliers deteriorate the quality of generative models during training. From the optimal curricula found by two algorithms (i.e. curricula at 110,000 and 100,000), we can see that the curriculum of the active set differs from that of normal training only by one-step data increment, implying that the active set is reliable for fast cluster-curriculum learning. The performance of the active set measured by FID is much worse than that of normal training, especially when more noisy data are fed into generative models. However, this does not change the whole V-shape of the accuracy curve. Namely, it is applicable as long as the active set admits the metric minimum corresponding to the appropriate curriculum. 
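The centrality used in the setup above can be sketched as follows, assuming Gaussian-kernel edge weights on the K-nearest-neighbor digraph and the stationary-distribution definition given in the Appendix. The kernel form, the dense adjacency matrix, and the way σ is matched to a geometric-mean weight of 0.8 are simplifying assumptions of this illustration; the full 200,000-image graph would require a sparse implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def centrality_scores(features, weight_geo_mean=0.8):
    """Stationary-distribution centrality on a K-nearest-neighbor digraph."""
    m = features.shape[0]
    k = int(4 * np.log(m))
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(features)
    dist, idx = nbrs.kneighbors(features)        # column 0 is the point itself
    d2 = dist[:, 1:] ** 2
    # pick sigma so the geometric mean of edge weights exp(-d^2 / sigma^2) is ~0.8
    sigma2 = -np.mean(d2) / np.log(weight_geo_mean)
    W = np.zeros((m, m))
    rows = np.repeat(np.arange(m), k)
    W[rows, idx[:, 1:].ravel()] = np.exp(-d2.ravel() / sigma2)
    P = W / W.sum(axis=1, keepdims=True)         # row-stochastic transition matrix
    # stationary distribution: eigenvector of P^T for the largest eigenvalue (i.e. 1)
    vals, vecs = np.linalg.eig(P.T)
    u = np.real(vecs[:, np.argmax(np.real(vals))])
    u = np.abs(u) / np.abs(u).sum()
    return u                                     # larger value = closer to a cluster center
```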
The V-shape of the centrality-FID curve on the cat data is due to that the noisy data of low centrality contains little effective information to characterize the cats, as already displayed in Figure 5. However, it is different for the CelebA face dataset where the face images of low centrality also convey the part of face features. As evident by Figure 6b, ProGAN keeps being optimized by the majority of the data until the curriculum of size 55, 000. To highlight the meaning of this nearly negligible minimum, we also conduct the exactly same experiment on the FFHQ face dataset containing 70, 000 face images of high-quality (b). For FFHQ data, the noisy face data can be ignored. The gray curve of normal training in Figure 6b indicates that the FID of ProGAN is monotonically decreased for all curricula. This gentle difference of the FID curves at the ends between CelebA and FFHQ clearly demonstrates the difficulty of noisy data to generative algorithms. To understand cluster curriculum deeply, we employ the geometric method formulated in section 4 to analyze the cat and face data. The percolation processes are both conducted with 512-dimensional features from ResNet34. Figure 7 displays the curve of n(A) that is the variable of interest in this scenario. As expected, the critical point in the percolation process occurs for both cases, as shown by blue curves. An obvious fact is that the optimal curricula (red strips) both fall into the (feasible) domains of percolation processes after the critical points, as indicated by gray color. This is a desirable property because data become rather sparse in the annuli when crossing the critical points. Then noisy data play the non-negligible role on tuning the parameters of generative models. Therefore, a fast learning strategy can be derived from the percolation process. Training may begin from the curriculum specified by the critical point, thus significantly accelerating cluster-curriculum learning. The pink strips are intervals of optimal curricula derived by generative models. For example, the value 9 of the pink interval in (a) is obtained by 9 = 20 − 11, where 11 is one of the minima (i.e. 110,000) in Figure 6a. The others are derived in the same way. The subtraction transforms the data number in the cluster to be the one in the annulus. The critical points are determined by searching the maxima of the absolute discrete difference of the associated curves. The scales of y-axes are normalized by 10,000. Another intriguing phenomenon is that the more noisy the data, the closer the optimal interval (red strip) is to the critical point. We can see that the optimal interval of the cat data is much closer to the critical point than that of the face data. What surprises us here is that the optimal interval of cluster curricula associated with the cat data nearly coincides with the critical point of the percolation process in the annulus! This means that the optimal curriculum may be found at the intervals close to the critical point of of n(A) percolation for heavily noisy data, thus affording great convenience to learning an appropriate generative model for such datasets. Cluster curriculum is proposed for robust training of generative models. The active set of cluster curriculum is devised to facilitate scalable learning. The geometric principle behind cluster curriculum is analyzed in detail as well. 
The experimental on the LSUN cat dataset and CelebA face dataset demonstrate that the generative models trained with cluster curriculum is capable of learning the optimal parameters with respect to the specified quality metric such as Fréchet inception distance and sliced Wasserstein distance. Geometric analysis indicates that the optimal curricula obtained from generative models are closely related to the critical points of the associated percolation processes established in this paper. This intriguing geometric phenomenon is worth being explored deeply in terms of the theoretical connection between generative models and high-dimensional geometry. It is worth emphasizing that the meaning of model optimality refers to the global minimum of the centrality-FID curve. As we already noted, the optimality is metric-dependent. We are able to obtain the optimal model with cluster curriculum, which does not mean that the algorithm only serves to this purpose. We know that more informative data can help learn a more powerful model covering the large data diversity. Here a trade-off arises, i.e. the robustness against noise and the capacity of fitting more data. The centrality-FID curve provides a visual tool to monitor the state of model training, thus aiding us in understanding the learning process and selecting suitable models according to noisy degree of given data. For instance, we can pick the trained model close to the optimal curriculum for heavily noisy data or the one near the end of the centrality-FID curve for datasets of little noise. In fact, this may be the most common way of using cluster curriculum. In this paper, we do not investigate the cluster-curriculum learning for the multi-class case, e.g. the ImageNet dataset with BigGAN . The cluster-curriculum learning of multiple classes is more complex than that we have already analyzed on the face and cat data. We leave this study for future work. The centrality or clustering coefficient pertaining to a cluster in data points or a community in a complex network is a well-studied traditional topic in machine learning and complex systems. Here we introduce the graph-theoretic centrality for the utilization of cluster curriculum. Firstly, we construct a directed graph (digraph) with K nearest neighbors by the method in . The weighted adjacency matrix W of the digraph can be formed in this way: 2 ) if x j is one of the nearest neighbors of x i and 0 otherwise, where d ij is the distance between x i and x j and σ is a free parameter. The density of data points can be quantified with the stationary probability distribution of a Markov chain. For a digraph built from data, the transition probability matrix can be derived by row normalization, say, P ij = W ij / j W ij. Then the stationary probability u can be obtained by solving an eigenvalue problem P u = u, where denotes the matrix transpose. It is straightforward to know that u is the eigen-vector of P corresponding to the largest eigenvalue (i.e. 1). u is also defined as a kind of PageRank in many scenarios. For density-based cluster curriculum, the centrality c coincides with the stationary probability u. Figure 1 in the main context shows the plausibility of using the stationary probability distribution to quantify the data density. Then we derive the length of semi-axis with respect to χ α, i.e. For a d-dimensional ellipsoid E, the volume of E is where r i the leng of semi-axis of E and Γ(·) is the Gamma function. 
Substituting $r_i = \chi_\alpha \sqrt{\lambda_i}$ into the above equation, we obtain the final formula of volume
$$\mathrm{vol}(E_\alpha) = \frac{\pi^{d/2}}{\Gamma(\tfrac{d}{2}+1)}\,\chi_\alpha^d \prod_{i=1}^{d}\sqrt{\lambda_i} = \frac{\pi^{d/2}}{\Gamma(\tfrac{d}{2}+1)}\,\chi_\alpha^d \sqrt{\det(\Sigma)},$$
where $\lambda_i$ is the $i$-th eigenvalue of $\Sigma$. Using the same volume formula, it is straightforward to get the volume of the packing sphere $S_\varepsilon$,
$$\mathrm{vol}(S_\varepsilon) = \frac{\pi^{d/2}}{\Gamma(\tfrac{d}{2}+1)}\,\varepsilon^d.$$
By the definition of $N(E_\alpha)$, we can write
$$N(E_\alpha) = \frac{\mathrm{vol}(E_\alpha)}{\mathrm{vol}(S_\varepsilon)} = \left(\frac{\chi_\alpha}{\varepsilon}\right)^d \sqrt{\det(\Sigma)}.$$
We conclude the proof of the theorem.
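As a quick numerical illustration of the theorem and its corollaries, the sketch below compares the empirical count n(A) with the packing number N(A), under the isotropic assumption N(0, I) used in Figure 4 and reading Corollary 1 as the difference of the two packing numbers; the sample size and quantile are illustrative choices.

```python
import numpy as np
from scipy.stats import chi2

def packing_number(chi, d, eps, det_sigma=1.0):
    """N(E_alpha) = (chi / eps)^d * sqrt(det(Sigma)) from Theorem 1."""
    return (chi / eps) ** d * np.sqrt(det_sigma)

def annulus_counts(d=100, n=70_000, alpha2=0.5, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, d))                  # samples from N(0, I)
    r2 = np.sum(X ** 2, axis=1)                      # squared Mahalanobis distances
    chi1 = np.sqrt(r2.max())                         # E_alpha1 encloses all data points
    chi_inner = np.sqrt(chi2.ppf(1 - alpha2, df=d))  # quantile of the inner ellipsoid
    eps = chi1 / n ** (1.0 / d)                      # Corollary 3 with det(Sigma) = 1
    n_A = int(np.sum(r2 > chi_inner ** 2))           # data points left in the annulus
    N_A = packing_number(chi1, d, eps) - packing_number(chi_inner, d, eps)
    return n_A, N_A

print(annulus_counts())
```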
Backdoor attacks aim to manipulate a subset of training data by injecting adversarial triggers such that machine learning models trained on the tampered dataset will make arbitrarily (targeted) incorrect prediction on the testset with the same trigger embedded. While federated learning (FL) is capable of aggregating information provided by different parties for training a better model, its distributed learning methodology and inherently heterogeneous data distribution across parties may bring new vulnerabilities. In addition to recent centralized backdoor attacks on FL where each party embeds the same global trigger during training, we propose the distributed backdoor attack (DBA) --- a novel threat assessment framework developed by fully exploiting the distributed nature of FL. DBA decomposes a global trigger pattern into separate local patterns and embed them into the training set of different adversarial parties respectively. Compared to standard centralized backdoors, we show that DBA is substantially more persistent and stealthy against FL on diverse datasets such as finance and image data. We conduct extensive experiments to show that the attack success rate of DBA is significantly higher than centralized backdoors under different settings. Moreover, we find that distributed attacks are indeed more insidious, as DBA can evade two state-of-the-art robust FL algorithms against centralized backdoors. We also provide explanations for the effectiveness of DBA via feature visual interpretation and feature importance ranking. To further explore the properties of DBA, we test the attack performance by varying different trigger factors, including local trigger variations (size, gap, and location), scaling factor in FL, data distribution, and poison ratio and interval. Our proposed DBA and thorough evaluation shed lights on characterizing the robustness of FL. Federated learning (FL) has been recently proposed to address the problems for training machine learning models without direct access to diverse training data, especially for privacy-sensitive tasks (; ;). Utilizing local training data of participants (i.e., parties), FL helps train a shared global model with improved performance. There have been prominent applications and ever-growing trends in deploying FL in practice, such as loan status prediction, health situation assessment (e.g. potential cancer risk assessment), and next-word prediction while typing (; ; 2019). Although FL is capable of aggregating dispersed (and often restricted) information provided by different parties to train a better model, its distributed learning methodology as well as inherently heterogeneous (i.e., non-i.i.d.) data distribution across different parties may unintentionally provide a venue to new attacks. In particular, the fact of limiting access to individual party's data due to privacy concerns or regulation constraints may facilitate backdoor attacks on the shared model trained with FL. Backdoor attack is a type of data poisoning attacks that aim to manipulate a subset of training data such that machine learning models trained on the tampered dataset will be vulnerable to the test set with similar trigger embedded . Backdoor attacks on FL have been recently studied in . However, current attacks do not fully exploit the distributed learning methodology of FL, as they embed the same global trigger pattern to all adversarial parties. We call such attacking scheme Figure 1: Overview of centralized and distributed backdoor attacks (DBA) on FL. 
The aggregator at round t + 1 combines information from local parties (benign and adversarial) in the previous round t, and update the shared model G t+1. When implementing backdoor attacks, centralized attacker uses a global trigger while distributed attacker uses a local trigger which is part of the global one. centralized backdoor attack. Leveraging the power of FL in aggregating dispersed information from local parties to train a shared model, in this paper we propose distributed backdoor attack (DBA) against FL. Given the same global trigger pattern as the centralized attack, DBA decomposes it into local patterns and embed them to different adversarial parties respectively. A schematic comparison between the centralized and distributed backdoor attacks is illustrated in Fig.1. Through extensive experiments on several financial and image datasets and in-depth analysis, we summarize our main contributions and findings as follows. • We propose a novel distributed backdoor attack strategy DBA on FL and show that DBA is more persistent and effective than centralized backdoor attack. Based on extensive experiments, we report a prominent phenomenon that although each adversarial party is only implanted with a local trigger pattern via DBA, their assembled pattern (i.e., global trigger) attains significantly better attack performance on the global model compared with the centralized attack. The are consistent across datasets and under different attacking scenarios such as one-time (single-shot) and continuous (multiple-shot) poisoning settings. To the best of our knowledge, this paper is the first work studying distributed backdoor attacks. • When evaluating the robustness of two recent robust FL methods against centralized backdoor attack , we find that DBA is more effective and stealthy, as its local trigger pattern is more insidious and hence easier to bypass the robust aggregation rules. • We provide in-depth explanations for the effectiveness of DBA from different perspectives, including feature visual interpretation and feature importance ranking. • We perform comprehensive analysis and ablation studies on several trigger factors in DBA, including the size, gap, and location of local triggers, scaling effect in FL, poisoning interval, data poisoning ratio, and data distribution. Specifically, at round t, the central server sends the current shared model G t to n ∈ [N] selected parties, where [N] denotes the integer set {1, 2, . . ., N}. The selected party i locally computes the function f i by running an optimization algorithm such as stochastic gradient descent (SGD) for E local epochs with its own dataset D i and learning rate l r to obtain a new local model L t+1 i. The local party then sends model update L t+1 i − G t back to the central server, who will averages over all updates with its own learning rate η to generate a new global model G t+1: This aggregation process will be iterated until FL finds the final global model. Unless specified otherwise, we use G t (L t i) to denote the model parameters of the global (local) model at round t. Attacker ability. Based on the Kerckhoffs's theory , we consider the strong attacker here who has full control of their local training process, such as backdoor data injection and updating local training hyperparameters including E and l r. This scenario is quite practical since each local dataset is usually owned by one of the local parties. 
However, attackers do not have the ability to influence the privilege of central server such as changing aggregation rules, nor tampering the training process and model updates of other parties. Objective of backdoor attack. Backdoor attack is designed to mislead the trained model to predict a target label τ on any input data that has an attacker-chosen pattern (i.e., a trigger) embedded. Instead of preventing the convergence in accuracy as Byzantine attacks , the purpose of backdoor attacks in FL is to manipulate local models and simultaneously fit the main task and backdoor task, so that the global model would behave normally on untampered data samples while achieving high attack success rate on backdoored data samples. The adversarial objective for attacker i in round t with local datatset D i and target label τ is: Here, the poisoned dataset The function R transforms clean data in any class into backdoored data that have an attacker-chosen trigger pattern using a set of parameters φ. For example, for image data, φ is factored into trigger location TL, trigger size TS and trigger gap TG (φ = {TS, TG, TL}), which are shown in Fig.2. The attacker can design his own trigger pattern and choose an optimal poison ratio r to in a better model parameter w * i, with which G t+1 can both assign the highest probability to target label τ for backdoored data R(x i j, φ) and the ground truth label y i j for benign data x i j. We again use Fig.1 to illustrate our proposed DBA in details. Recall that current centralized attack embeds the same global trigger for all local attackers 1 . For example, the attacker in Fig.1.(a) embeds the training data with the selected patterns highlighted by 4 colors, which altogether constitutes a complete global pattern as the backdoor trigger. In our DBA, as illustrated in Fig.1.(b), all attackers only use parts of the global trigger to poison their local models, while the ultimate adversarial goal is still the same as centralized attack -using the global trigger to attack the shared model. For example, the attacker with the orange sign poisons a subset of his training data only using the trigger pattern located at the orange area. Similar attacking methodology applies to green, yellow and blue signs. We define each DBA attacker's trigger as the local trigger and the combined whole trigger as the global trigger. For fair comparison, we keep similar amount of total injected triggers (e.g., modified pixels) for both centralized attack and DBA. In centralized attack, the attacker tries to solve the optimization problem in Eq.2 without any coordination and distributed processing. In contrast, DBA fully exploits the distributed learning and local data opacity in FL. Considering M attackers in DBA with M small local triggers. Each DBA attacker m i independently performs the backdoor attack on their local models. This novel mechanism breaks a centralized attack formulation into M distributed sub-attack problems aiming to solve where φ * i = {φ, O(i)} is the geometric decomposing strategy for the local trigger pattern of attacker m i and O(i) entails the trigger decomposition rule for m i based on the global trigger φ. DBA attackers will poison with the poison round interval I and use the scale factor γ to manipulate their updates before submitting to the aggregator. We will explain the related trigger factors in the next subsection. 
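A minimal sketch of one DBA round is given below, covering the local trigger embedding with poison ratio r, the scaled malicious update submitted to the server, and the aggregation rule described above. It assumes a pixel-pattern trigger on image arrays of shape (batch, height, width) and model weights stored as dictionaries of arrays; the function names and trigger coordinates are illustrative rather than the authors' implementation.

```python
import numpy as np

def apply_local_trigger(images, rows, cols, value=1.0):
    """Embed attacker m_i's local trigger: paint the chosen pixel block white."""
    poisoned = images.copy()
    poisoned[:, rows[0]:rows[1], cols[0]:cols[1]] = value
    return poisoned

def poison_batch(x, y, trigger, target_label, poison_ratio):
    """Replace a fraction r of a training batch with backdoored samples."""
    n_poison = int(round(poison_ratio * len(x)))
    x_p, y_p = x.copy(), y.copy()
    x_p[:n_poison] = apply_local_trigger(x[:n_poison], *trigger)
    y_p[:n_poison] = target_label
    return x_p, y_p

def scaled_malicious_update(local_weights, global_weights, gamma):
    """Model replacement: submit L_i^{t+1} = gamma * (X - G^t) + G^t."""
    return {k: gamma * (local_weights[k] - global_weights[k]) + global_weights[k]
            for k in global_weights}

def fedavg(global_weights, submitted, eta, N):
    """Server update G^{t+1} = G^t + (eta / N) * sum_i (L_i^{t+1} - G^t)."""
    new = dict(global_weights)
    for k in new:
        new[k] = new[k] + (eta / N) * sum(w[k] - global_weights[k] for w in submitted)
    return new
```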
We note that although none of the adversarial party has ever been poisoned by the global trigger under DBA, we find that DBA indeed outperforms centralized attack significantly when evaluated with the global trigger. With the framework of DBA on FL, there are multiple new factors to be explored. Here we introduce a set of trigger factors that we find to be critical. Fig.2 explains the location, size and gap attribute of triggers in image dataset. For simplicity, we set all of our local triggers to the same rectangle shape 2. Fig.3 explains our trigger attribute of ranked feature importance in tabular data (e.g., the loan dataset). Trigger Size TS: the number of pixel columns (i.e., the width) of a local distributed trigger. Trigger Gap TG: the distance of the Gap x and Gap y, which represent the distance between the left and right, as well as the top and bottom local trigger, respectively. Trigger Location TL: (Shif t x, Shif t y) is the offset of the trigger pattern from the top left pixel. Scale γ: the scaling parameter γ = η/N defined in is used by the attacker to scale up the malicious model weights. 3 For instance, assume the ith malicious local model is X. The new local model L t+1 i that will be submitted is calculated as L t+1 i = γ(X − G t) + G t. Poison Ratio r: the ratio controls the fraction of backdoored samples added per training batch. Note that larger r should be preferable when attacking intuitively, and there is a tradeoff between clean data accuracy and attack success rate, but too large r would also hurt the attack effectiveness once the model becomes useless. Poison Interval I: the round intervals between two poison steps. For example, I = 0 means all the local triggers are embedded within one round, while I = 1 means the local triggers are embedded in consecutive rounds. Data Distribution: FL often presumes non-i.i.d. data distribution across parties. Here, we use a Dirichlet distribution with different hyperparameter α to generate different data distribution following the setups in . DBA is evaluated on four classification datasets with non-i.i.d. data distributions: Lending Club Loan Data(LOAN) , MNIST, CIFAR-10 and Tiny-imagenet. The data description and parameter setups are summarized in Tb.1. We refer the readers to Appendix A.1 for more details. Following the standard setup, we use SGD and trains for E local epochs with local learning rate l r and batch size 64. A shared global model is trained by all participants, 10 of them are selected in each round for aggregation. The local and global triggers used are summarized in Appendix A.1. 2 Some factor definitions may not apply to non-image data, which will be clarified accordingly. 3 In our implementation, every distributed attacker uses the same γ. Following the attack analysis in , we evaluate multiple-shot attack (Attack A-M) and single-shot attack (Attack A-S) two attack scenarios, which are called naive approach and model replacement respectively in the original paper. • Attack A-M means the attackers are selected in multiple rounds and the accumulated malicious updates are necessary for a successful attack; otherwise the backdoor would be weakened by benign updates and soon forgotten by the global model. In order to quickly observe the difference between centralized and distributed attacks and control the effect of random party selection, we perform a complete attack in every round, that is, all DBA attackers or centralized attackers are consistently selected. 
Benign participants are randomly selected to form a total of 10 participants. • Attack A-S means that every DBA attacker or the centralized attacker only needs one single shot to successfully embed its backdoor trigger. To achieve that, the attacker performs scaling in their malicious updates to overpower other benign updates and ensure that the backdoor survives the aggregation step. For fair comparison, DBA and centralized attack finish a complete backdoor in the same round. Take MNIST as an example, DBA attackers separately embed their local triggers in round 12, 14, 16, 18 for local triggers 1 to 4, while the centralized attacker implants its global trigger in round 18. Benign participants are randomly selected to form a total of 10 participants. These two scenarios reveal different aspects of DBA and centralized backdoor attacks when the global model is triggered by local and global triggers. Attack A-M studies how easy the backdoor is successfully injected while Attack A-S studies how fast the backdoor effect diminishes. In our experiments, we evaluate the attack success rates of DBA and centralized attacks using the same global trigger. For fair comparison, we make sure the total number of backdoor pixels of DBA attackers is close to and even less than that of the centralized attacker (it is hard to control them to be the same due to data sampling with certain distribution). The ratio of the global trigger of DBA pixels to the centralized is 0.992 for LOAN, 0.964 for MNIST, 0.990 for CIFAR and 0.991 for Tiny-imagenet. Moreover, in order to avoid the influence of the original label when testing attack success rate, we remove the test data whose true label equals to the backdoor target label. In three image datasets, we begin to attack when the main accuracy of global model converges, which is round 10 for MNIST, 200 for CIFAR, 20 for Tiny-imagenet in Attack A-M. The reason is provided in Appendix. A.2. The global learning rate η in Attack A-M is 0.1 for CIFAR, 1 for others and in Attack A-S is 0.1 for all datasets. In Attack A-M, the attack success rate of DBA is always higher than centralized attack in all cases as shown in Fig.4. DBA also converges faster and even yields a higher attack success rate in MNIST. Under DBA, we find a prominent phenomenon that the attack success rate of the global trigger is higher than any local trigger even if the global trigger never actually appears in any local training dataset. Moreover, the global trigger converges faster in attack performance than local triggers. Centralized attacker embeds the whole pattern so its attack success rate of any local triggers is low. Due to the continuous poisoning, the attack rate on local triggers still increases for LOAN but this phenomenon does not appear in MNIST and Tiny-imagenet, which indicates that the success of global trigger does not require the same success for local triggers. The also suggest that DBA can lead to high attack success rate for the global trigger even when some of its local triggers only attain low attack success rates. This finding is unique for DBA and also implies the inefficiency of centralized attack on FL. In Attack A-S, DBA and centralized attack both reach a high attack success rate after performing a complete backdoor in all datasets with a scale factor γ = 100 as shown in Fig.4. In the consecutive rounds, the backdoor injected into the global model is weakened by benign updates so the attack success rate gradually decreases. 
There is an exception that centralized attack in CIFAR suffers from the initial drop and then rises slowly, which is caused by the high local learning rate of benign participants and is also observed in . We also find that the attack success rate of centralized attack in local triggers and the global trigger drops faster than that of DBA, which shows that DBA yields a more persistent attack. For example, in MNIST and after 50 rounds, DBA remains 89% attack success rate while centralized attack only gets 21%. Although DBA performs data poisoning only using local triggers, the show that its global trigger lasts longer than any local triggers, which suggests DBA can make the global trigger more resilient to benign updates. RFA and FoolsGold are two recently proposed robust FL aggregation algorithms based on distance or similarity metrics, and in particular RFA is claimed to be able to detect more nuanced outliers which goes beyond the worst-case of the Byzantine setting . In addition, as Attack A-S is more easily detected due to the scaling operation , we will focus on evaluating the attack effectiveness of DBA and centralized backdoor attacks against both RFA and FoolsGold under Attack A-M setting. Distributed Attack against Robust Aggregation Defence. RFA aggregates model parameters for updates and appears robust to outliers by replacing the weighted arithmetic mean in the aggregation step with an approximate geometric median. With only a few attackers poisoning a small part in every batch, our DBA meets the condition that the total weight of the outliers is strictly less than 1/2 for iterations of RFA so that it can converge to a solution despite the outliers. The maximum iteration of RFA is set to be 10 while in fact it converges rapidly, which can give a high-quality solution within about 4 iterations. Fig.5 shows the attack performance of DBA and centralized attack under RFA. For Tiny-imagenet, the centralized attack totally fails at least 80 rounds but the DBA attackers with lower distances and higher aggregation weights can perform a successful backdoor attack. For MNIST and CIFAR, the attack success rate of DBA is much higher and the convergence speed is much faster. For LOAN, centralized backdoor attack takes more than 20 rounds to converge than DBA. To explain the effectiveness of DBA, we calculate the Euclidean norm between attacker's model parameter updates and the final geometric median as a distance metric. As shown in Tb.2 in Appendix, the malicious updates submitted by DBA attackers have lower distances than that of the centralized attacker's updates in all datasets, which help them to better bypass the defense. FoolsGold reduces aggregation weights of participating parties that repeatedly contribute similar gradient updates while retaining the weights of parities that provide different gradient updates . Fig.5 shows that DBA also outperforms centralized attack under FoolsGold. In three image datasets, the attack success rate of DBA is notably higher while converging faster. DBA in MNIST reaches 91.55% in round 30 when centralized attack fails with only 2.91% attack success rate. For LOAN, which are trained with a simple network, FoolsGolds cannot distinguish the difference between the malicious and clean updates and assigns high aggregation weights for attackers, leading to a fast backdoor success. To explain the effectiveness of DBA, we report FoolsGold's weights on adversarial parties in Tb.2 in Appendix. 
Comparing to centralized attack, although FoolsGold assigns smaller aggregation weights to DBA attacker due to their similarity of backdoor target label, DBA is still more successful. This is because the sum of weights of distributed attackers could be larger than centralized attacker. Feature importance can be calculated by various classification tools or visually interpreted by classspecific activation maps. For example, in LOAN we show that the top features identified by different classifiers are quite consistent (see Tb.4 in Appendix). Here we use Grad-CAM and Soft Decision Tree to provide explanations for DBA. More details about Soft Decision Tree trained on our datasets are discussed in Appendix A.7. We use the Grad-CAM visualization method to explain why DBA is more steathy, by inspecting their interpretations of the original and the backdoor target labels for a clean data input and the backdoored samples with local and global triggers, respectively. Fig.6 shows the Grad-CAM of a hand-written digit'4'. We find that each locally triggered image alone is a weak attack as none of them can change the prediction (no attention on the top left corner where the trigger is embedded). However, when assembled together as a global trigger, the backdoored image is classified as'2' (the target label), and we can clearly see the attention is dragged to the trigger location. The fact that Grad-CAM in most of locally triggered images are similar to the clean image, demonstrates the stealthy nature of DBA. Figure 7: Feature importance of LOAN learned from its soft decision tree Using the soft decision tree of MNIST as another example, we find that the trigger area after poisoning indeed becomes much more significant for decision making in the corresponding soft decision tree, as shown in Fig.22 in Appendix. A.7. Similar is found in LOAN. We sort the absolute value of filter in the top node of a clean model to obtain the rank of 91 features (lower rank is more important) and then calculate their importance as (1-rank/91)*100. Six insignificant features and six significant features are separately chosen to run DBA. The in Fig.7 show that based on the soft decision tree, the insignificant features become highly important for prediction after poisoning. Here we study the DBA trigger factors introduced in Sec.2.3 under Attack A-S, unless specified otherwise. We only change one factor in each experiment and keep other factors the same as in Sec.3.1. In Attack A-S, DBA-ASR shows the attack success rate while Main-Acc denotes the accuracy of the global model when the last distributed local trigger is embedded. DBA-ASR-t, which reveals the persistence, is the attack success rate of t rounds after a complete DBA is performed. Main-Acc-t is the main accuracy after t rounds. Note that in general we expect a small decrease for main task accuracy right after the DBA but will finally get back to normal after a few rounds of training. • Enlarging scale factor increases both DBA-ASR and DBA-ASR-t, and narrows the gap between them. For CIFAR, although the DBA-ASR reaches over 90% and barely changes once γ is bigger than 40, larger γ still have more positive impact on DBA-ASR-t. • For our four datasets, the more complex the model architecture (in Tb.1), the more obvious the decline in the main accuracy as γ increases, because the scaling undermines more model parameters in complex neural network. 
The main accuracy of LOAN doesn't drop because of simple model, while the main accuracy of Tiny-imagenet in attacking round even drops to 2.75% when γ = 110. • Larger scale factor alleviates the averaging impacts of central server for DBA, which leads to a more influential and resistant attack performance, but also cause the main accuracy of global model to descend in the attacking round for three image datasets. In addition, using large scale factor in an anomalous update that is too different from other benign updates and is easy to detect based on the magnitude of the parameters. Therefore, there is a trade-off in choosing the scale factor. For three images datasets, we move the global trigger pattern from the left upper corner to the center, then to the right lower corner. The dotted line in Fig.9 means that the trigger reaches the right boundary and starts to move along the right edges. The implementation details are in Appendix. A.9. • We observe a U-shape curve between TL and DBA-ASR (in MNIST) / DBA-ASR-t (in Tinyimagenet and MNIST). This is because the middle part in images usually contains the main object. DBA in such areas is harder to succeed and will be faster forgotten because these pixels are fundamental to the main accuracy. This finding is apparent in MNIST, where the main accuracy after 40 rounds only remains 1.45% in center (TL = 9) while has 91.57% in left upper corner (TL = 0). • Similar finding can be found in LOAN as shown in Fig.9.(a). DBA using low-importance features has higher success rate in attacking round and subsequent rounds. The low-importance trigger achieves 85.72% DBA-ASR after 20 rounds while the high-importance trigger is 0%. • In the case of four local trigger patterns located in the four corners of an image, corresponding to the maximum trigger gap in Fig.10, the DBA-ASR and DBA-ASR-t are both low in image datasets. Such failure might be caused by the local convolution operations and large distance between local triggers so that the global model cannot recognize the global trigger. • The curve of DBA-ASR and DBA-ASR-t in Fig.10.(a) has a significant drop in the middle. This happens when the right lower local trigger covers the center areas in MNIST images. Similar observations can be explained based on Fig.9 • Using zero trigger gap in CIFAR and Tiny-imagenet, DBA still succeeds but we find the backdoor will be forgotten faster. We suggest using non-zero trigger gap when implementing DBA. • In image datasets, larger trigger size gives higher DBA-ASR and DBA-ASR-t. Nevertheless, they are stable once TS becomes large enough, suggesting little gain in using over-sized triggers. • For MNIST, DBA-ASR is low when TS = 1. This is because each local trigger is too small to be recognized in global model. In the same setting, the centralized attack which uses the global pattern with 4 pixels also isn't very successful and its attack success rate soon decreases below 10% within 4 rounds. This reflects that under Attack A-S, backdoor attacks with too small trigger are ineffective. • The attack performance is poor when all distributed attackers submit the scaled updates at the same round (I = 0) in all datasets because the scaling effect is too strong, vastly changing the parameter in the global model and causes it to fail in main accuracy. It's also ineffective if the poison interval is too long because the early embemed triggers may be totally forgotten. • The peaks in Fig.12.(a)(b) show that there exists an optimal poison round interval for LOAN and MNIST. 
DBA attackers can wait until the global model converges and then embeds the next local trigger to maximize backdoor performance, which is a competitive advantage over centralized attack. • In CIFAR and Tiny-imagenet, varying the interval from 1 up to 50 does not lead to remarkable changes in DBA-ASR and DBA-ASR-t, which manifests that the local trigger effect can last long and contribute to the attack performance of global trigger. From this aspect, distributed attack is extraordinarily robust to RL and should be considered as a more serious threat. In our experiments, the training batch size is 64. As the X-axis variable (# of poisoned samples) in Fig.13 increases from 1, DBA-ASR and DBA-ASR-t first increase and then drop. It's intuitive that more poisoned data can lead to a better backdoor performance. However, a too large poison ratio means that the attacker scales up the weight of a local model of low accuracy, which leads to the failure of global model in the main task. In the case of poisoning full batch, after DBA, the global model in CIFAR and Tiny-imagenet trains the main task all over again, whose main accuracy is normal after 90 and 40 rounds, respectively. But in MNIST it is reduced to an overfitted model that predicts the target label for any input, so the attack success rate is always 100% while the main accuracy is about 10% in the subsequent rounds. Therefore, it's better for DBA to remain stealthy in its local training by using a reasonable poison ratio that also maintains accuracy on clean data. Under various data distributions, DBA-ASR is stable, indicating the practicability and robustness of DBA. See more details in Appendix. A.10. Federated Learning. first introduced federated learning (FL) to solve the distributed machine learning problem. Since the training data is never shared with the server (aggregator), FL is in favor of machine learning with privacy and regulation constraints. In this paper, we discuss and analyze our experiments in standard FL settings performed in synchronous update rounds. Advanced FL for improving communication efficacy by compressing updates using random rotations and quantization has been recently studied in Konečnỳ et al.. Backdoor Attack on Federated Learning. proposed a model-poisoning approach on FL which replaced the global model with a malicious local model by scaling up the attacker's updates. considered the case of one malicious attacker aiming to achieve both global model convergence and targeted poisoning attack, by boosting the malicious updates. They proposed two strategies, alternating minimization and estimating other benign updates, to evade the defences under weighted and non-weighted averaging for aggregation. We note that these works only consider centralized backdoor attack on FL. Robust Federated Learning. Robust FL aims to train FL models while mitigating certain attack threats. proposed a novel defense based on the party updating diversity without limitation on the number of adversarial parties. It adds up historical updating vectors and calculate the cosine similarity among all participants to assign global learning rate for each party. Similar updating vectors will obtain lower learning rates and therefore the global model can be prevented from both label-flipping and centralized backdoor attacks. proposed a robust aggregation approach by replacing the weighted arithmetic mean with an approximate geometric median, so as to minimize the impacts of "outlier" updates. 
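For reference, an approximate geometric median of party updates can be computed with smoothed Weiszfeld iterations, sketched below on flattened update vectors. This is only an illustration of the idea behind such robust aggregation; the exact RFA procedure may differ in its weighting and stopping rule.

```python
import numpy as np

def geometric_median(updates, weights=None, iters=10, eps=1e-6):
    """Approximate geometric median of party updates via smoothed Weiszfeld iterations."""
    updates = np.asarray(updates, dtype=float)           # shape: (num_parties, num_params)
    w = np.ones(len(updates)) if weights is None else np.asarray(weights, dtype=float)
    z = np.average(updates, axis=0, weights=w)           # start from the weighted mean
    for _ in range(iters):
        dist = np.maximum(np.linalg.norm(updates - z, axis=1), eps)  # smoothing floor
        beta = w / dist                                   # closer updates get larger weight
        z = np.average(updates, axis=0, weights=beta)
    return z
```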
Through extensive experiments on diverse datasets including LOAN and three image datasets in different settings, we show that in standard FL our proposed DBA is more persistent and effective than centralized backdoor attack: DBA achieves higher attack success rate, faster convergence and better resiliency in single-shot and multiple-shot attack scenarios. We also demonstrate that DBA is more stealthy and can successfully evade two robust FL approaches. The effectiveness of DBA is explained using feature visual interpretation for inspecting its role in aggregation. We also perform an in-depth analysis on the important factors that are unique to DBA to explore its properties and limitations. Our suggest DBA is a new and more powerful attack on FL than current backdoor attacks. Our analysis and findings can provide new threat assessment tools and novel insights for evaluating the adversarial robustness of FL. A APPENDIX The financial dataset LOAN contains the current loan status (Current, Late, Fully Paid, etc.) and latest payment information, which can be used for loan status prediction. It consists of 1,808,534 data samples and we divide them by 51 US states, each of whom represents a participant in FL. 80% of data samples are used for training and the rest is for testing. In the three image datasets, a Dirichlet distribution is used to divide training images for 100 parties. The distribution hyperparameter is 0.5 for MNIST and CIFAR and 0.01 for Tiny-imagenet. Every party uses SGD as optimizer and trains for E local epochs with local learning rate l r (see Tb.1) and a batch size of 64. A shared global model is trained by all participants, 10 of whom are selected in each round to submit locally computed SGD updates for aggregation. For the pixel-pattern backdoor, we assign white color to chosen pixels and swap the label of any sample with such triggers into the target label, which is "digit 2" in MNIST, "bird" in CIFAR and "bullfrog" in Tiny-imagenet. Except in Section 4 where we analyze the trigger factor effect, in other sections the trigger factors are set to be φ = {4, 2, 0} for MNIST; φ = {6, 3, 0} for CIFAR; φ = {10, 2, 0} for Tiny-imagenet with 4 DBA attackers. Because the image size in tiny-imagenet are larger than cifar and mnist, we set the row number of the local trigger to 2 in Tiny-imagenet while it is 1 in other image datasets. Similarly, for the preprocessed 5 LOAN dataset, six features 6 which are the low importance features in Fig.7 are chosen and split by 3 DBA attackers, each of whom manipulates two features as a local trigger. They assign local trigger features with new values 7 that are slightly larger than their maximum values, and swap label to "Does not meet the credit policy. Status:Fully Paid". Every attacker's batch is mixed with correctly labeled data and such backdoored data with poison ratio r (see Tb.1). Attackers have their own local poison l r and poison E (see Tb.1) to maximize their backdoor performance and remain stealthy. In Attack A-M, we found that if DBA poisons from scratch, the main accuracy was low and hard to converge. Therefore in three image datasets, we begin to attack when the main accuracy of global converges, which is round 10 for MNIST, 200 for CIFAR, 20 for Tiny-imagenet. 
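A minimal sketch of the local-trigger poisoning just described (white trigger pixels, label swapped to the target class, and a fraction r of each batch backdoored) might look as follows; the array layout, trigger coordinates, and helper names are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def apply_local_trigger(images, rows, cols, value=1.0):
    """Set the chosen trigger pixels to white (1.0 for images normalized to [0, 1])."""
    poisoned = images.copy()
    poisoned[:, rows[:, None], cols] = value
    return poisoned

def poison_batch(images, labels, trigger_rows, trigger_cols, target_label, poison_ratio):
    """Replace the first `poison_ratio`-fraction of a batch with backdoored samples."""
    n_poison = int(len(images) * poison_ratio)        # e.g. 20/64 for a DBA attacker on MNIST
    images = images.copy()
    labels = labels.copy()
    images[:n_poison] = apply_local_trigger(images[:n_poison], trigger_rows, trigger_cols)
    labels[:n_poison] = target_label                  # swap to the attacker's target class
    return images, labels

# Toy batch of 64 MNIST-like 28x28 images; one attacker's local trigger is a short
# horizontal strip of pixels near the upper-left corner (location and size are illustrative).
batch_x = np.random.rand(64, 28, 28)
batch_y = np.random.randint(0, 10, size=64)
rows, cols = np.array([1]), np.array([2, 3, 4, 5])
px, py = poison_batch(batch_x, batch_y, rows, cols, target_label=2, poison_ratio=20 / 64)
print(py[:5], px[0, 1, 2:6])   # the first samples carry the target label and white trigger pixels
```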
As mentioned in , it is also better to attack late in Attack A-S because, when the global model is converging, the updates from benign clients contain fewer commonly shared patterns but more individual features, which are more likely to be canceled out during aggregation and thus have less impact on the backdoor. To evaluate DBA on irregularly shaped triggers, we decomposed the logo 'ICLR' into 'I', 'C', 'L', 'R' as local triggers on the three image datasets, and we decomposed the physical glasses pattern into four parts, as in the examples shown in Fig. 14. The results under Attack A-M are shown in Fig. 15 and Fig. 16. DBA is always more effective than the centralized attack, which is similar to the results for regular-shape triggers in Fig. 4. This also holds for glasses patterns with different colors, as shown in Fig. 17. In our experiment setup we assumed that there are f distributed attackers and 1 centralized attacker. To further evaluate Attack A-S, we conduct centralized attacks the same number of times as DBA, but each update includes 1/f of the poisoning samples, so that the total number of poisoning samples used to compute the gradient updates stays the same. (Footnote 5: We preprocess LOAN by dropping the features that are not numeric and cannot be one-hot encoded, and then normalizing the remaining 91 features; the mean value of each feature is below 10. Footnote 6: num_tl_120dpd_2m, num_tl_90g_dpd_24m, pub_rec_bankruptcies, pub_rec, acc_now_delinq. Footnote 7: 10, 80, 20, 100, 20, 100.) There are two ways to achieve 1/f poisoning samples in each update for the centralized attack, and we evaluate both as follows. Change the poison ratio to 1/f. We decrease the fraction of backdoored samples added per training batch to 1/f. Specifically, the poison ratio is 3/64 (centralized) vs. 9/64 (distributed) for LOAN; 5/64 vs. 20/64 for MNIST; 1/64 vs. 4/64 for CIFAR; and 1/64 vs. 4/64 for Tiny-imagenet. Other parameters are the same as described in the paper. Figure 18: Scale f times with 1/f poison ratio each time. As noted earlier ("... the more obvious the decline in the main accuracy as the scale factor increases, because the scaling undermines more model parameters"), the setting of f-times scaling for the centralized attack has a larger impact on complex neural networks like the ResNet used in CIFAR and Tiny-imagenet (in Tb.1). However, we note that this setting is not a totally fair comparison in the single-shot attack setting, as the same malicious agent of the centralized attack is allowed to attack f times, while each malicious agent of DBA only attacks once. Change the data size to 1/f. We divide the local dataset into f parts, use 1/f of the dataset for each update, and keep the poison ratio unchanged. For CIFAR and Tiny-imagenet, we find that DBA is more effective, as shown in Fig. 20. For LOAN and MNIST, neither attack performs well. We believe the reason is that LOAN and MNIST are simpler tasks and the benign clients quickly agree on the correct gradient direction, so it is more difficult for malicious updates to succeed. Bulyan. We use Bulyan based on the Byzantine-resilient aggregation rule Krum. To meet the assumption that 4f + 3 <= n, we set (n = 15, f = 3) for LOAN and (n = 20, f = 4) for the image datasets. For CIFAR, DBA is more effective, as shown in Fig. 21. For the other datasets, both attacks fail. However, we note that our distributed and centralized backdoor attacks are not optimized for the Byzantine setting.
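Both the distributed and centralized attacks above rely on scaling up the submitted update so that it survives averaging (the scale factor γ discussed earlier, applied f times in the centralized variant here). A minimal sketch of this scaling inside a FedAvg-style round, with a single attacker shown for brevity, is given below; the toy dimensions, γ values, and function names are our own assumptions.

```python
import numpy as np

def scale_malicious_update(global_weights, local_weights, gamma):
    """Model-replacement-style scaling: submit G + gamma * (X - G) instead of X."""
    return global_weights + gamma * (local_weights - global_weights)

def fedavg(client_weights):
    """Plain (unweighted) FedAvg over the selected clients' submitted weights."""
    return np.mean(client_weights, axis=0)

# Toy round with 10 selected clients: 9 benign, 1 attacker holding a backdoored local model.
rng = np.random.default_rng(1)
global_w = np.zeros(6)
benign = [global_w + rng.normal(0, 0.01, 6) for _ in range(9)]
backdoored_local = global_w + np.array([0.5, -0.3, 0.2, 0.0, 0.1, -0.4])

for gamma in (1, 10, 100):
    attacker = scale_malicious_update(global_w, backdoored_local, gamma)
    new_global = fedavg(benign + [attacker])
    # larger gamma pushes the averaged global model toward the backdoored model,
    # at the cost of a more anomalous (easier to detect) individual update
    print(gamma, np.round(new_global, 3))
```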
We believe it is worthwhile to explore distributed versions of other new attack algorithms, e.g., attacks that manipulate their updates to mitigate the Krum and Bulyan defenses. A.7 proposed the Soft Decision Tree, which distills a trained neural network by training on the data and its soft targets, i.e., the predictions of the neural network over the classes. Trained with gradient descent, every inner node has a learned filter and a bias to make a binary decision, and each leaf node has a learned distribution. To some extent, we can use the filter values to reflect the importance of every feature at the internal nodes. We learn soft decision trees from the clean neural network and the DBA-poisoned neural network of LOAN and MNIST, and they all achieve about 90% test accuracy on the main and backdoor tasks. If we look at the third node in the fourth layer in Fig.22.(b), the potential classifications are only 2 and 0, thus its filter simply learns to distinguish these two digits. The area of the global pattern is extremely dark, which means these pixels correspond to small filter values, so this inner node makes the leftmost branch decision toward target label 2 when triggered by the global pattern, because the probability is lower than 0.5. Taking an opposite example, the leftmost node in the second layer is extremely white in the area of the global pattern, which means these pixels correspond to large filter values and contribute to the rightmost branch decision when encountering the global pattern. Moreover, clean images will not trigger the filters in the backdoor pattern area, and the major digit shape in the center dominates the decision route, as in the examples in Fig.24.(b). Comparing Fig.22.(a)(b), the trigger area after poisoning becomes much more significant for decision making. The Soft Decision Tree provides insights into the neural network and gives explainable classification decisions. Examples of the decision routes at inference time for clean and poisoned input data are given for MNIST in Fig.25 and in Fig.24. We find that the poisoned model already starts to misbehave from the top node of the tree. We also run 10000 poisoned and clean samples through the clean and poisoned LOAN models to study the sample-wise importance based on the filter value multiplied by the input feature value in Fig.23. With this local importance metric, the originally low-importance features indeed become salient in the poisoned model with poisoned input. During this process we increase Shift_y and at first keep Shift_x = Shift_y. After the rightmost pixel reaches the right edge of the images, we fix Shift_x at its largest value, which is the X value of the dotted line in Fig.9, and keep increasing Shift_y until the lowest pixel reaches the bottom edge of the images. TL is the maximum of Shift_x and Shift_y. • By increasing the hyperparameter α in the Dirichlet distribution, we can simulate from non-i.i.d. to i.i.d. distributions for the image datasets (a partitioning sketch is given below). When evaluated under Attack A-M, Fig.28 shows that DBA-ASR is stable under various distributions, which exhibits the practicability and robustness of DBA when attacking standard FL. • Data distribution has more influence on the DBA performance under robust aggregation algorithms, which calculate distances or similarities between the benign updates and the malicious updates. When the training data are non-i.i.d., the updates across the benign participants already exhibit high diversity, so the poisoned update is better concealed among them and less likely to be detected.
In our experiments, it is easier for DBA to succeed against RFA and FoolsGold under a more non-i.i.d. data distribution in CIFAR and Tiny-imagenet. In Fig. 7, the names of the six low-importance features are num_tl_120dpd_2m, num_tl_90g_dpd_24m, pub_rec_bankruptcies, pub_rec, acc_now_delinq, tax_liens; the names of the six high-importance features are out_prncp, total_pymnt_inv, out_prncp_inv, total_rec_prncp, last_pymnt_amnt, all_util.
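The Dirichlet-based split referred to above (α = 0.5 for MNIST and CIFAR, 0.01 for Tiny-imagenet; larger α approaches i.i.d.) can be sketched as follows. This is an illustrative partitioning routine under our own naming and a toy label set, not the exact code used for the experiments.

```python
import numpy as np

def dirichlet_partition(labels, n_parties=100, alpha=0.5, seed=0):
    """Split sample indices among parties, drawing per-class proportions from Dirichlet(alpha).

    Smaller alpha -> each class concentrates on a few parties (more non-i.i.d.);
    larger alpha -> per-party proportions approach uniform (closer to i.i.d.).
    """
    rng = np.random.default_rng(seed)
    party_indices = [[] for _ in range(n_parties)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        proportions = rng.dirichlet(alpha * np.ones(n_parties))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for party, chunk in enumerate(np.split(idx, cuts)):
            party_indices[party].extend(chunk.tolist())
    return party_indices

# Toy labels with 10 classes; compare the class spread of party 0 for different alpha values.
labels = np.repeat(np.arange(10), 600)
for a in (0.01, 0.5, 100.0):
    parts = dirichlet_partition(labels, n_parties=100, alpha=a)
    n_classes_party0 = len(np.unique(labels[parts[0]])) if parts[0] else 0
    print(f"alpha={a}: party 0 holds {len(parts[0])} samples from {n_classes_party0} classes")
```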
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rkgyS0VFvr
We propose a novel distributed backdoor attack on federated learning and show that it is not only more effective than standard centralized attacks, but also harder to defend against with existing robust FL methods.
Graph networks have recently attracted considerable interest, and in particular in the context of semi-supervised learning. These methods typically work by generating node representations that are propagated throughout a given weighted graph. Here we argue that for semi-supervised learning, it is more natural to consider propagating labels in the graph instead. Towards this end, we propose a differentiable neural version of the classic Label Propagation (LP) algorithm. This formulation can be used for learning edge weights, unlike other methods where weights are set heuristically. Starting from a layer implementing a single iteration of LP, we proceed by adding several important non-linear steps that significantly enhance the label-propagating mechanism. Experiments in two distinct settings demonstrate the utility of our approach. We study the problem of graph-based semi-supervised learning (SSL), where the goal is to correctly label all nodes of a graph, of which only a few are labeled. Methods for this problem are often based on assumptions regarding the relation between the graph and the predicted labels. One such assumption is smoothness, which states that adjacent nodes are likely to have similar labels. Smoothness can be encouraged by optimizing an objective where a loss term L over the labeled nodes is augmented with a quadratic penalty over edges: Here, y are the true labels, f are "soft" label predictions, S is the set of labeled nodes, and w are non-negative edge weights. The quadratic term in Eq. is often referred to as Laplacian Regularization since (for directed graphs) it can equivalently be expressed using the graph Laplacian BID5. Many early methods for SSL have adopted the general form of Eq. BID51 BID50 BID4 BID6 BID0 BID42 BID47. Algorithms such as the seminal Label Propagation BID51 are simple, efficient, and theoretically grounded but are limited in two important ways. First, predictions are parameterized either naïvely or not at all. Second, edge weights are assumed to be given as input, and in practice are often set heuristically. Recent deep learning methods address the first point by offering intricate predictive models that are trained discriminatively BID47 BID38 BID48 BID28 BID20 BID21 BID34. Nonetheless, many of them still require w as input, which may be surprising given the large body of work highlighting the importance of good weights BID51 BID24 BID46 BID4 BID25. While some methods consider some form of weight learning BID45 BID35, to some extent they have drifted away from the original quadratic criterion. Other works address the second point by proposing disciplined ways for learning w. However, these either assume specific simple parameterizations BID49 BID25, or altogether consider weights disjointly from predictions BID46 BID32.Our goal in this paper is to simultaneously addresses both issues. We propose a framework that, given a graph, jointly learns both a parametric predictive model and the edge weights. To do this, we begin by revisiting the Label Propagation (LP), and casting it as a differentiable neural network. Each layer in the network corresponds to a single iterative update, making a forward pass equivalent to a full run of the algorithm. Since the network is differentiable, we can then optimize the weights of the LP solution using gradient descent. As we show, this can be done efficiently with a suitable loss function. The key modeling point in our work is that labeled information is used as input to both the loss and the network. 
In contrast to most current methods, our network's hidden layers directly propagate labeling information, rather than node or feature representations. Each layer is therefore a self-map over the probability simplex; special care is therefore needed when introducing non-linearities. To this end, we introduce two novel architectural components that are explicitly designed to operate on distributions. The first is an information-gated attention mechanism, where attention is directed based on the informativeness and similarity of neighboring nodes' states. The second is a novel "bifurcation" operator that dynamically controls label convergence, and acts as a balancing factor to the model's depth. Our main guideline in designing our model was to tailor it to the semi-supervised setting. The is a slim model having relatively few parameters and only one model-specific hyper-parameter (depth), making it suitable for tasks where only few labeled nodes are available. The final network provides a powerful generalization of the original propagation algorithm that can be trained efficiently. Experiments on benchmark datasets in two distinct learning settings show that our model compares favorably against strong baselines. Many SSL methods are based on Eq. or on similar quadratic forms. These differ in their assumed input, the optimization objective, and the parametric form of predictions. Classic methods such as LP BID51 assume no parametric form for predictions, and require edge weights as inputs. When node features are available, weights are often set heuristically based on some similarity measure (e.g., w ij = exp x i − x j 2 2 /σ 2). LP constrains predictions on S to agree with their true labels. Other propagation methods relax this assumption BID4 BID6, add regularization terms BID0, or use other Laplacian forms BID50.Some methods aim to learn edge weights, but do not directly optimize for accuracy. Instead, they either model the relations between the graph and features BID46 BID32 or simply require f as input BID14 BID23 BID17. Methods that focus on accuracy are often constrained to specific parameterizations or assumptions BID25. BID49 optimize the leave-one-out loss (as we do), but require a series of costly matrix inversions. Several recent works in deep learning have been focused on graph inputs in general BID2 and specifically for inductive SSL. The main idea behind these methods is to utilize a weighted graph to create meaningful vector representations of nodes, which are then fed into a classifier. Methods are typically designed for one of two settings: when the input includes only a graph, and when node features are available. When the input includes only a graph (and no features), node representations are generated using embedding techniques. BID38 use a SkipGram model over random walks on the graph, which are used to define context. BID20 further this idea by introducing expressive parameterized random walks, while BID43 focus on optimizing similarities between pairs of node embeddings. Various methods have been proposed to utilize node features, when available. Spectral methods, stemming from a CNN formulation for graphs BID11, include different approximations of spectral graph convolutions BID16 BID28 adaptive convolution filters BID34, or attention mechanisms BID45 Embedding approaches have been suggesting for handling bag-of-words representations BID48 and general node attributes such as text or continuous features BID18. 
Many of the above methods can be thought of as propagating features over the graph in various forms. Our method, in contrast, propagates labels. The main advantage label propagation is that labeled information is used not only to penalize predictions (in the loss), but also to generate predictions. We begin by describing the learning setup and introducing notation. The input includes a (possibly directed) graph G = (V, E), for which a subset of nodes S ⊂ V are labeled by y S = {y i} i∈S with y i ∈ {1, . . ., C}. We refer to S as the "seed" set, and denote the unlabeled nodes by U = V \ S, and the set of i's (incoming) neighbors by N i = {j : (j, i) ∈ E}. We use n = |V |, m = |E|, = |S|, and u = |U | so that n = + u. In a typical task, we expect to be much smaller than n. We focus on the transductive setting where the goal is to predict the labels of all i ∈ U. Most methods (as well as ours) output "soft" labels f i ∈ ∆ C, where ∆ C is the C-dimensional probability simplex. For convenience we treat "hard" labels y i as one-hot vectors in ∆ C. All predictions are encoded as a matrix f with entries f ic = P[y i = c]. For any matrix M, we will use M A to denote the sub-matrix with rows corresponding to A. Under this notation, given G, S, y S, and possibly x, our goal is to predict soft labels f U that match y U.In some cases, the input may also include features for all nodes x = {x i} i∈V. Importantly, however, we do not assume the input includes edge weights w = {w e} e∈E, nor do we construct these from x. We denote by W the weighted adjacency matrix of w, and useW andw for the respective (row)-normalized weights. Many semi-supervised methods are based on the notion that predictions should be smooth across edges. A popular way to encourage such smoothness is to optimize a (weighted) quadratic objective. Intuitively, the objective encourages the predictions of all adjacent nodes to be similar. There are many variations on this idea; here we adopt the formulation of BID51 where predictions are set to minimize a quadratic term subject to an agreement constraint on the labeled nodes: DISPLAYFORM0 In typical applications, w is assumed to be given as input. In contrast, our goal here is to learn them in a discriminative manner. A naïve approach would be to directly minimize the empirical loss. For a loss function L, regularization term R, and regularization constant λ, the objective would be: DISPLAYFORM1 While appealing, this approach introduces two main difficulties. First, f * is in itself the solution to an optimization problem (Eq.), and so optimizing Eq. FORMULA1 is not straightforward. Second, the constraints in Eq. ensure that fIn what follows, we describe how to overcome these issues. We begin by showing that a simple algorithm for approximating f * can be cast as a deep neural network. Under this view, the weights (as well as the algorithm itself) can be parametrized and optimized using gradient descent. We then propose a loss function suited to SSL, and show how the above network can be trained efficiently with it. Recall that we would like to learn f * (w; S). When w is symmetric, the objective in Eq. FORMULA10 is convex and has a closed form solution. This solution, however, requires the inversion of a large matrix, which can be costly, does not preserve sparsity, and is non-trivial to optimize. The LP algorithm BID51 circumvents this issue by approximating f * using simple iterative averaging updates. 
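Before the formal description that follows, here is a minimal NumPy sketch of these iterative averaging updates: labeled nodes are clamped to their true labels while every unlabeled node repeatedly takes the weighted average of its neighbors' soft labels. The (uniform) weights and the function and variable names are our own illustrative choices; learning the weights is exactly what the rest of the paper addresses.

```python
import numpy as np

def label_propagation(W, y_seed, seed_mask, n_iters=50):
    """Classic LP: clamp labeled nodes to their one-hot labels, average neighbors elsewhere.

    W: (n, n) nonnegative weight matrix; y_seed: (n, C) one-hot rows (only rows in S used);
    seed_mask: (n,) boolean indicator of the labeled set S.
    """
    W_norm = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)   # row-normalize: w~_ij
    n, C = y_seed.shape
    f = np.full((n, C), 1.0 / C)                                   # uniform prior on unlabeled nodes
    f[seed_mask] = y_seed[seed_mask]
    for _ in range(n_iters):                                       # each pass is one "unrolled" layer
        f = W_norm @ f                                             # every node averages its neighbors
        f[seed_mask] = y_seed[seed_mask]                           # re-clamp the seed set
    return f

# Toy path graph 0-1-2-3-4 with node 0 labeled class 0 and node 4 labeled class 1.
W = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
y = np.zeros((5, 2)); y[0, 0] = 1; y[4, 1] = 1
mask = np.array([True, False, False, False, True])
print(np.round(label_propagation(W, y, mask), 2))   # soft labels interpolate between the two seeds
```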
Let f (t) be the set of soft labels at iteration t andw ij = w ij / k w ik, then for the following recursive relation: DISPLAYFORM0 it holds that lim t→∞ f (t) = f * for any initial f BID51. In practice, the iterative algorithm is run up to some iteration T, and predictions are given using f (T). This dynamic process can be thought of as labels propagating over the graph from labeled to unlabeled nodes over time. Motivated by the above, the idea behind our method is to directly learn weights for f (T), rather than for f *. In other words, instead of optimizing the quadratic solution, our goal is to learn weights under which LP preforms well. This is achieved by first designing a neural architecture whose layers correspond to an "unrolling" of the iterative updates in Eq., which we describe next. The main building block of our model is the basic label-propagation layer, which takes in two main inputs: a set of (predicted) soft labels h = {h i} n i=1 for all nodes, and the set of true labels y A for some A ⊆ S. For clarity we use A = S throughout this section. As output, the layer produces a new set of soft labels h = {h i} n i=1 for all nodes. Note that both h i and h i are in ∆ C. The layer's functional form borrows from the LP update rule in Eq. where unlabeled nodes are assigned the weighted-average values of their neighbors, and labeled nodes are fixed to their true labels. For a given w, the output is: DISPLAYFORM0 whereW is the row-normalized matrix of w. A basic network is obtained by composing T identical layers: DISPLAYFORM1 where the model's parameters w are shared across layers, and the depth T is the model's only hyper-parameter. The input layer h is initialized to y i for each i ∈ S and to some prior ρ i (e.g., uniform) for each i ∈ U. Since each layer h (t) acts as a single iterative update, a forward pass unrolls the full algorithm, and hence H can be thought of as a parametrized and differentiable form of the LP algorithm. In practice, rather than directly parameterizing H by w, it may be useful to use more sophisticated forms of parameterization. We will denote such general networks by H(θ), where θ are learned parameters. As a first step, given edge features {φ e} e∈E, we can further parametrize w using linear scores s ij and normalizing with softmax: DISPLAYFORM2 where θ φ ∈ R d are learned parameters. We propose using three types of features (detailed in Appendix B): Node-feature similarities: when available, node features can be used to define edge features by incorporating various similarity measures such as cosine similarity (φ ij = x i x j / x i x j) and Gaussian similarity DISPLAYFORM3 DISPLAYFORM4 where each similarity measure induces a distinct feature. Graph measures: these include graph properties such as node attributes (e.g., source-node degree), edge centrality measures (e.g., edge betweenness), path-ensemble features (e.g., Katz distance), and graph-partitions (e.g., k-cores). These allow generalization across nodes based on local and global edge properties. Seed relations: relations between the edge (i, j) and nodes in S, such the minimal (unweighted) distance to i from some s ∈ S. Since closer nodes are more likely to be labeled correctly, these features are used to quantify the reliability of nodes as sources of information, and can be class-specific. The label propagation layers in H pass distributions rather than node feature representations. It is important to take this into account when adding non-linearities. 
We therefore introduce two novel components that are explicitly designed to handle distributions, and can be used to generalize the basic layer in Eq. The general layer (illustrated in FIG1) replaces weights and inputs with functions of the previous layer's output: DISPLAYFORM0 whereÃ(·) is a normalized weight matrix (replacingW), µ(·) is a soft-label matrix (replacing h (t) ), and θ α and θ τ are corresponding learned parameters. The edge-weight functionà offers an information-gated attention mechanism that dynamically allocates weights according to the "states" of a node and its neighbors. The labeling function µ is a time-dependent bifurcation mechanism which controls the rate of label convergence. We next describe our choice ofà and µ in detail. The LP update (Eq.) uses fixed weights w. The importance of a neighbor j is hence predetermined, and is the same regardless of, for instance, whether h j is close to some y, or close to uniform. Here we propose to relax this constraint and allow weights to change over time. Thinking of h (t) i as the state of i at time t, we replace w ij with dynamic weights a (t) ij that depend on the states of i and j through an attention mechanism α: DISPLAYFORM0 where θ α are the attention parameters. à in Eq. FORMULA8 is the corresponding row-normalized weight matrix. determined by e and d (boxed bars), which are computed using h (t) and θ. Here θ directs attention at the informative and similar neighbor (thick arrow), and the update amplifies the value of the correct label. (Right) The bifurcation mechanism for C = 3 and various τ. Arrows map each h ∈ ∆ C to bif(h) ∈ ∆ C. DISPLAYFORM1 When designing α, one should take into account the nature of its inputs. Since both h i and h j are label distributions, we have found it useful to let α depend on information theoretic measures and relations. We use negative entropy e to quantify the certainty of a label, and negative KL-divergence d to measure cross-label similarity. Both are parameterized by respective class-dependent weights θ e c and θ d c, which are learned: DISPLAYFORM2 where: DISPLAYFORM3 In a typical setting, unlabeled nodes start out with uniform labels, making the overall entropy high. As distributions pass through the layers, labeled information propagates, and both entropy and divergence change. The attention of node i is then directed according to the informativeness (entropy) and similarity (divergence) of the states of its neighbors. As we show in the experiments (Sec. 5), this is especially useful when the data does not include node features (from which weights are typically derived). FIG2 (left) exemplifies this. Although the updates in Eq. converge for any w, this can be slow. Even with many updates, predictions are often close to uniform and thus sensitive to noise BID39. One effective solution is to dynamically bootstrap confident predictions as hard labels BID29 BID19. This process speeds up the rate of convergence by decreasing the entropy of low-entropy labels. Here we generalize this idea, and propose a flexible bifurcation mechanism. This mechanism allows for dynamically increasing or decreasing the entropy of labels. For node i and some τ ∈ R, h ic is replaced with: DISPLAYFORM0 Note that since h i ∈ ∆ C, this definition ensures that, for any τ, we have that µ(h i) ∈ ∆ C as well. When τ > 1 and as τ increases, entropy decreases, and confident labels are amplified. In contrast, when 0 < τ < 1 and as approaches 0, entropy decreases, and labels become uniform. 
For τ < 0 the effects are reversed, and setting τ = 1 gives µ(h i) = h i, showing that Eq. FORMULA8 DISPLAYFORM1 Thus, when θ τ = 0, Eq. FORMULA1 Recall that our goal is to learn the parameters θ of the network H(θ; S). Note that by Eq., for all i ∈ S it holds that H i (θ; S) = y i. In other words, as in LP, predictions for all labeled nodes are constrained to their true value. Due to this, the standard empirical loss becomes degenerate, as it penalizes H i (θ; S) only according to y i, and the loss becomes zero for all i ∈ S and for any choice of θ. As an alternative, we propose to follow BID49 and minimize the leave-one-out loss: DISPLAYFORM0 where S −i = S \ {i}, L is a loss function, R is a regularization term with coefficient λ, and θ contains all model parameters (such as θ φ, θ α, and θ τ). Here, each true label y i is compared to the model's prediction given all labeled points except i. Thus, the model is encouraged to propagate the labels of all nodes but one in a way which is consistent with the held-out node. In practice we have found it useful to weight examples in the loss by the inverse class ratio (estimated on S).The leave-one-out loss is a well-studied un-biased estimator of the expected loss with strong generalization guarantees BID26 BID8 BID22. In general settings, training the model on all sets {S −i} i∈S introduces a significant computational overhead. However, in SSL, when is sufficiently small, this becomes feasible BID49. For larger values of, a possible solution is to instead minimize the leave-k-out loss, using any number of sets with k randomly removed examples. When λ is small, θ is unconstrained, and the model can easily overfit. Intuitively, this can happen when only a small subset of edges is sufficient for correctly propagating labels within S. This should in noisy labels for all nodes in U. In the other extreme, when λ is large, w approaches 0, and by Eq. w is uniform. The current graph-SSL literature includes two distinct evaluation settings: one where the input includes a graph and node features, and one where a graph is available but features are not. We evaluate our method in both settings, which we refer to as the "features setting" and "no-features setting", respectively. We use benchmark datasets that include real networked data (citation networks, social networks, product networks, etc.). Our evaluation scheme follows the standard SSL setup 1 BID51 BID50 BID12 BID49 BID38 BID20 BID43 BID34. First, we sample k labeled nodes uniformly at random, and ensure at least one per class. Each method then uses the input (graph, labeled set, and features when available) to generate soft labels for all remaining nodes. Hard labels are set using argmax. We repeat this for 10 random splits using k = 1% labeled nodes. For further details please see Appendices A and C.For each experimental setting we use different Label Propagation Network (LPN) variant that differ in how edge weights are determined. Both variants use bifurcation with linear time-dependency (Sec. 3.2), and include a-symmetric bi-directional edge weights. In all tasks, LPN was initialized to simulate vanilla LP with uniform weights by setting θ = 0. Hence, we expect the learned model deviate from LP (by utilizing edge features, attention, or bifurcation) only if this in more accurate predictions. We choose T ∈ {10, 20, . . ., 100} by running cross-validation on LP rather than LPN. This process does not require learning and so is extremely fast, and due to bifurcation, quite robust (see FIG5). 
For training we use a class-balanced cross-entropy loss with 2 regularization, and set λ by 5-fold cross-validation. We optimize with Adam using a learning rate of 0.01. We use all relevant datasets from the LINQS collection BID41. These include three citation graphs, where nodes are papers, edges link citing papers, and features are bag-of-words. As described in Sec. 2.2, for this setting we use a model (LPN φ) where w is parameterized using a linear function of roughly 30 edge features (φ). These are based on the given "raw" node features, the graph, and the labeled set. See Appendix B for more details. Baselines include LP BID51 with uniform (LP U) and RBF (LP RBF) weights, the LP variant ADSORPTION BID0, ICA BID33, Graph Convolutional Networks , and Graph Attention Networks (GAT, BID45). 2 We also add a features-only baseline (RIDGEREG) and a graph-only baseline (NODE2VEC). We use the FLIP collection BID40, which includes several types of real networks. As no features are available, for generating meaningful weights we equip our model (LPN α) with the attention mechanism (Sec. 3.1), letting weights vary according to node states. Baselines include LP with uniform weights (LP U), the spectral embedding LEM BID3, and the deep embedding DEEPWALK BID38, LINE BID43, and NODE2VEC BID20.Results: TAB4 includes accuracies for both features and no-features settings, each averaged over 10 random splits. As can be seen, LPN outperforms other baselines on most datasetes, and consistently ranks high. Since LPN generalizes LP, the comparison to LP U and LP RBF quantifies the gain achieved by learning weights (as opposed to setting them heuristically). When weights are parameterized using features, accuracy improves by 13.4% on average. When attention is used, accuracy improves by a similar 13.7%.While some deep methods perform well on some datasets, they fail on others, and their overall performance is volatile. This is true for both learning settings. A possible explanation is that, due to their large number of parameters and hyper-parameters, they require more labeled data. Deep methods tend to perform well when more labeled nodes are available, and when tuning is done on a large validation set, or even on an entire dataset (see, e.g., BID48 BID28 ; BID34 BID45). In contrast, LPN requires relatively few parameters (θ) and only a singe model-specific hyper-parameter (T). Analysis: FIG4 gives some insight as to why LPN learns good weights. It is well known that the Laplacian's eigenvalues (and specifically λ 2, the second smallest one) play an important role in the generalization of spectral methods BID4. The figure shows how λ 2 and accuracy change over the training process. As can be seen, learning leads to weights with increasing λ 2, followed by an increase in accuracy. FIG5 demonstrates the effect of bifurcation for different depths T. As can be seen, a model with bifurcation (LPN bif) clearly outperforms the same model without it (LPN nobif). While adding depth generally improves LPN bif, it is quite robust across T. This is mediated by larger values of τ that increase label convergence rate for smaller T. Interestingly, LPN nobif degrades with large T, and even τ slightly above 1 makes a difference. In this work we presented a deep network for graph-based SSL. Our design process revolved around two main ideas: that edge weights should be learned, and that labeled data should be propagated. 
We began by revisiting the classic LP algorithm, whose simple structure allowed us to encode it as a differentiable neural network. We then proposed two novel ad-hoc components: information-gated attention and bifurcation, and kept our design slim and lightly parameterized. The ing model is a powerful generalization of the original algorithm, that can be trained efficiently using the leave-one-out loss using few labeled nodes. We point out two avenues for future work. First, despite its non-linearities, the current network still employs the same simple averaging updates that LP does. An interesting challenge is to design general parametric update schemes, that can perhaps be learned. Second, since the Laplacian's eigenvalues play an important role in both theory and in practice, an interesting question is whether these can be used as the basis for an explicit form of regularization. We leave this for future work. Dataset statistics are summarized in TAB1. As described in Sec. 5, there are two collections of data, LINQS 3 BID41 and FLIP 4 BID40 Although possible, parameterizing H directly by w will likely lead to overfitting. Instead, we set edge weights to be a function of edge features φ ij ∈ R d and parameters θ φ ∈ R d, and normalize using softmax over scores: DISPLAYFORM0 Our main guideline in choosing features is to keep in line with the typical SSL settings where there are only few labeled nodes. To this end, we use only a handful of features, thus keeping the number of model parameters to a minimum. We propose three types of features suited for different settings. Most works consider only "raw" node features (e.g., bag-of-words for papers in a citation network). The model, however, requires edge features for parameterizing edge weights. Edge features are therefore implicitly constructed from node features, typically by considering node-pair similarities in features space. This has three limitations. First, node feature spaces tend to be large and can thus lead to over-parameterization and eventually overfitting. Second, edge features are inherently local, as they are based only on the features of corresponding nodes, and global graph-dependent properties of edges are not taken into account. Third, parameterization is completely independent of the labeled set, meaning that edges are treated similarly regardless of whether they are in an informative region of the graph (e.g., close to labeled nodes) or not (e.g., far from any labeled node).In accordance, we propose three types of features that overcome these issues by leveraging raw features, the graph, and the labeled "seed" set. Raw features (φ x): When the data includes node features x i, a simple idea is to use a small set of uncorrelated (unparameterized) similarity measures. Examples include feature similarity measures such as cosine (x i x j / x i x j) or Gaussian (exp{− x i − x j 2 2 /σ 2}) and the top components of dimensionality reduction methods. Graph features (φ G): When the graph is real (e.g., a social network), local attributes and global roles of nodes are likely to be informative features. These can include node attributes (e.g., degree), centrality measures (e.g., edge betweenness), path-ensembles (e.g., Katz distance), and graph-partitions (e.g., k-cores). These have been successfully used in other predictive tasks on networked data (e.g., BID13).Seed features (φ S): Since labels propagate over the graph, nodes that are close to the labeled set typically have predictions that are both accurate and confident. 
One way of utilizing this is to associate an incoming edge with the lengths of paths that originate in a labeled node and include it. This acts as a proxy for the reliability of a neighbor as a source of label information. In general, features should be used if they lead to good generalization; in our case, this depends on the available data (such as node features), the type of graph (e.g., real network vs. k-NN graph), and on the layout of the labeled set (e.g., randomly sampled vs. crawled). TAB4 provides a list of some useful features of each of the above types; these were used in our experiments. Examples include the Jaccard coefficient BID30, |Γ(u) ∩ Γ(v)|/|Γ(u) ∪ Γ(v)|, a graph feature at the edge level originating in link prediction, and the Adamic-Adar index BID30. In the table, x, y denote feature vectors of two different nodes, Γ(u) is the set of neighbors of u, σ(s, t) is the number of shortest paths from s to t, and σ(s, t|e) is the number of those that pass through e. • DEEPWALK: used source code provided by the authors. • NODE2VEC: used source code provided by the authors. • LINE: used source code provided by the authors.
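As a complement to the feature descriptions above, the following is a small sketch of how a few raw-feature, graph, and seed edge features of this kind can be computed for a toy graph. The exact feature set used in the experiments is the one listed in TAB4; the helper below is only an assumption-laden example (the names, similarity choices, and BFS-based seed distance are ours).

```python
import numpy as np

def edge_features(x, adj, edges, seed, sigma=1.0):
    """Compute a few illustrative features for each directed edge (i, j).

    x: (n, d) node features; adj: list of neighbor sets; edges: list of (i, j) pairs;
    seed: set of labeled node indices.
    """
    # Unweighted BFS distance from the labeled set (a simple "seed relation" feature).
    n = len(adj)
    dist = np.full(n, np.inf)
    frontier = list(seed)
    for s in seed:
        dist[s] = 0
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if dist[v] == np.inf:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt

    feats = []
    for i, j in edges:
        cos = x[i] @ x[j] / (np.linalg.norm(x[i]) * np.linalg.norm(x[j]) + 1e-12)
        gauss = np.exp(-np.sum((x[i] - x[j]) ** 2) / sigma ** 2)
        jaccard = len(adj[i] & adj[j]) / max(len(adj[i] | adj[j]), 1)
        feats.append([cos, gauss, jaccard, dist[i]])
    return np.array(feats)

# Toy graph: 4 nodes on a cycle, node 0 labeled.
x = np.random.rand(4, 3)
adj = [{1, 3}, {0, 2}, {1, 3}, {0, 2}]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(edge_features(x, adj, edges, seed={0}).round(2))
```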
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
r1g7y2RqYX
Neural net for graph-based semi-supervised learning; revisits the classics and propagates *labels* rather than feature representations
Neural architecture search (NAS) has made rapid progress in computer vision, whereby new state-of-the-art results have been achieved in a series of tasks with automatically searched neural network (NN) architectures. In contrast, NAS has not made comparable advances in natural language understanding (NLU). Corresponding to the encoder-aggregator meta architecture of typical neural network models for NLU tasks (Gong et al. 2018), we re-define the search space by splitting it into two parts: the encoder search space and the aggregator search space. The encoder search space contains basic operations such as convolutions, RNNs, multi-head attention and its sparse variants, e.g., star-transformers. Dynamic routing is included in the aggregator search space, along with max (avg) pooling and self-attention pooling. Our search algorithm is then realized via DARTS, a differentiable neural architecture search framework. We progressively reduce the search space every few epochs, which further reduces the search time and resource costs. Experiments on five benchmark data-sets show that the new neural networks we generate can achieve performance comparable to state-of-the-art models that do not involve language model pre-training. Neural architecture search (NAS) has recently attracted intensive attention. On one hand, promising methodological innovations for NAS have been developed, e.g. the seminal gradient-based NAS approach DARTS (Liu, Simonyan, and Yang 2018), followed by improvements such as SNAS (Xie et al. 2018), P-DARTS, PC-DARTS (Xu et al. 2019), etc. On the other hand, NAS has helped to discover better models for a variety of vision tasks, e.g., image classification (Zoph and Le 2017; Zoph et al. 2017; Cai, Zhu, and Han 2018), semantic segmentation, object detection (Ghiasi, Lin, and Le 2019), super-resolution (Ahn, Kang, and Sohn 2018), etc. For natural language processing tasks, NAS is relatively less studied. Except for the general methodology-wise innovations NASNet (Zoph and Le 2016), ENAS (Pham et al. 2018) and DARTS (Liu, Simonyan, and Yang 2018), which pay slight extra effort to searching for new RNN cells on language modeling (LM) tasks, there are few studies tailored to NLU tasks. One such example is the evolved transformer (So, Liang, and Le 2019), which uses an evolution-based NAS algorithm to search for a better transformer architecture for machine translation. Although state-of-the-art performance has been achieved on 4 machine translation tasks, the computation cost is exceedingly high since a large number of models have to be evaluated. In fact, NAS has not been fully investigated for a wide variety of fundamental natural language understanding (NLU) tasks, such as classification (e.g. sentiment analysis), natural language inference (NLI), and sequence tagging tasks such as named entity recognition (NER). Especially, there is no existing work on the effectiveness of one-shot architecture search (Bender et al. 2018) methods on NLU tasks, which could otherwise significantly reduce the search cost, as it does in vision tasks. A typical neural network architecture for NLU includes an encoder, which contextualizes the embedded text inputs and extracts higher-level features, and an aggregator, which aggregates the encoded inputs into a fixed-length vector to make a prediction (Gong et al. 2018).
In terms of encoders, many previous NAS literature restrict the search space to nonlinear maps such as tanh and sigmoid, and the objective to be the discovery of a new recurrent cell to form a new type of recurrent neural network (RNN). However, other than RNNs, there are many other available encoders, for example, convolutional networks (CNN) (Kim 2014), and attentionbased model such as transformer (Vaswani et al. 2017), etc. In addition, recent works e.g. star-transformer (Guo et al. 2019) have proposed more sparse versions of transformer to reduce the computational complexity and improve the generalization when there is no pre-trained language model. In addition, as far as we know, there is no existing work on searching for an aggregator. A collection of aggregators are available (Gong et al. 2018). However, one have to choose manually in a trial-and-error fashion. In this work, we design an encoder search space that contains a rich collection of encoders. The involved operations include: i) the zero map and identity map; ii) the two most commonly used RNNs, LSTM (Hochreiter and Schmidhuber 1997) and GRU (Cho et al. 2014); iii) highway network (Srivastava, Greff, and Schmidhuber 2015); iv) a series of convolutional networks with different kernel sizes; v) multi-head attention from (Vaswani et al. 2017); vi) startransformer (Guo et al. 2019) and its variants, which will be explained later in the next section. The combination of encoder operations is searched in a encoder search cell, which is a directed acyclic graph (DAG) of intermediate nodes collected by the encoder operations from the encoder search space. To further reduce the human designs, we propose to search for a suitable aggregator along with the search of encoder cell via an aggregator search cell which includes max (average) pooling, self-attention pooling and dynamic routing (Gong et al. 2018). The aggregator search cell is a DAG with only one step in which the only node is connected to the inputs by a mixture of aggregators. Our search strategy is mainly based on DARTS (Liu, Simonyan, and Yang 2018). To reduce computation cost, we employ a progressive search space reduction strategy similar to P-DARTS. Experiments are performed on three different kinds of NLU tasks, i.e., text classification, NLI and NER, with 5 benchmark datasets. For fair comparison, we only compare our with former state-of-the-art (SOTA) models without large-scale LM pre-training, or any other outside resources like knowledge bases, or any human designed features. Results have shown that with the help of NAS on our search space, we achieve that are comparable to the SOTA on these 5 tasks, indicating the effectiveness of NAS in the field of NLU research. Our work contributes the field by the following aspects: • We re-define the search space for neural architecture search in NLU tasks, by extending and modifying the encoder search space from the evolved transformer, and define the aggregator search space. • To the best of our knowledge, we are the first to conduct NAS experiments on NLU tasks such as classification, NLI, NER tasks, with one-shot NAS. • Our approach achieves the that are comparable to the state-of-the-art models designed by human experts, on various NLU tasks (classification, NLI, NER), by using neural architecture search over the search space defined above. In addition, we demonstrate the effectiveness of one-shot architecture search for NLU tasks. 
• We propose a modularized version of star-transformer and its variant, thus including a sparse version of transformer into the search space, which is also novel in the literature. The ing advantage is that the search cost can be reduced notably and the network's generalization capability can also be improved. Related Work Recently, a new research field named neural architecture search (NAS) has been drawing more and more attention. The goal is to find automatic mechanisms for generating new neural architectures to replace conventional handcrafted ones. Recently, it is widely applied to computer vision tasks, such as image classification (Zoph and Le 2017; Zoph et al. 2017; Cai, Zhu, and Han 2018), semantic segmentation, object detection (Ghiasi, Lin, and Le 2019), super-resolution (Ahn, Kang, and Sohn 2018), etc. However, NAS is less well studied in the field of natural language understanding (NLU). Recent works (Zoph and Le 2016; Pham et al. 2018; Liu, Simonyan, and Yang 2018) search new recurrent cells for the language modeling (LM) task on the PennTreebank dataset 1. The recurrent cell discovered by (Liu, Simonyan, and Yang 2018) achieves the test perplexity of 56.1, which is competitive with the stateof-the-art model enhanced by a mixture of softmaxes. The evolved transformer (So, Liang, and Le 2019) applies NAS to discover better versions of the transformer architecture. Eploying an evolution-based search algorithm, and the vanilla transformer as the initial population, it generates a better transformer architecture that consistently outperform the vanilla transformer on 4 benchmark machine translation tasks. Our work contributes by going beyond the RNN structure and re-defining the search space to include a richer connection of operations. Our work is implemented on DARTS (Liu, Simonyan, and Yang 2018) and P-DARTS. DARTS relaxes the search space to be continuous, so that the architecture can be optimized with respect to its validation set performance by gradient descent. Due to its simplicity, DARTS has inspired a series follow-up work to improve the search stability and efficiency. Based on DARTS, P-DARTS ) divides the search process into multiple stages and progressively increase the network depth at the end of each stage. Our work contributes to the gradient-based NAS (and more generally, one-shot NAS) research by investigating its effectiveness in discovering new NN architectures for a series of NLU tasks. Our search space design takes advantages of the recent advances in the NLU field. One of the most import advances in sentence encoding is the application of various self-attention mechanisms, among which the transformer (Vaswani et al. 2017) is the most prominent one, which has become ubiquitous in NLU research. Specifically, the QANet ) modifies the transformer architecture to obtain the first place on the SQuaD leaderboard 2. The transformer is powerful due to its multi-head self-attention mechanism, which can well capture the contextual information. However, the transformer maybe be difficult to train and generalize well on a small or medium sized data-set (Guo et al. 2019). Thus, many other self-attention operations are proposed, e.g., dynamic self-attention (Yoon, Lee, and Lee 2018) and DiSAN (Shen et al. 2018). Recently, (Guo et al. 2019) propose the star-transformer, a sparser version of the multi-head attention model, and achieves competitive on a series of benchmark datasets like SST-1, SNLI, CoNLL2003. 
On the aggregation side, an important advancement is the application of capsule networks and dynamic routing policy in text classification Gong et al. 2018). Capsule networks can dynamically decide what and how much information need to be transferred from each word to the final encoding of the text sequence, thus achieving better even with simple encoders (Gong et al. 2018). Our work is built upon these work and contributes by: i) include some of the most prominent attention based encoders and aggregators into the search space, and experiment on whether NAS can generate new architectures that have competitive ; ii) we are the first to propose the aggregator search space; iii) we include a modularized version of the star-transformer and its variant into the search space, thus we are the first to combine the dense and sparse multi-head self-attention operations into the same search space. We first analyze the meta architectures for a series of NLU tasks, based on which we will define the search space As pointed out by (Gong et al. 2018), an NLP model with text inputs and discrete labels (possibly a sequence of labels) can be assembled by the following components: an embedding layer, an encoding layer, an aggregation layer and a prediction layer. Embedding layers usually are static pretrained embeddings like word2vec (Mikolov et al. 2013), or contextualized embedding like ELMO (Peters et al. 2018) and BERT (Devlin et al. 2018). We mainly focus on the encoding layer and aggregation layer. A text sequence with words S = w 0, w 1,..., w L−1 is mapped into a d-dimensional embedding vector space as X = x 0, x 1,..., x L−1. The encoding layer integrates the information inside the embedding layer and extract higherlevel features. The encoded inputs are denoted as H = h 0, h 1,..., h L−1. When the task at hand requires predicting a single label, the final prediction layer requires a fix-length vector. Thus an aggregator is needed to aggregate the information inside sequences of various length to a single fixlength vector, namely h *. In this work, we investigate the neural architecture search for three different tasks: classification (CLS), natural language inference (NLI) and named entity recognition (NER). The meta architectures of neural networks are depicted in Figure 1. For classification, the encoder is followed by an aggregator whose output will be passed to the prediction layer. For the NLI task, the encoding and aggregating for the premise and the hypothesis are the same as CLS task, and the aggregated vectors, h * 1, h * 2, will interact via an interaction layer before being passed into the prediction layer. Following (Chen et al. 2016), we define the interaction layer Note that in this work, we will not consider any sort of cross attentions between the two inputs before or after the interaction layer. In addition, due to limited resources for search, we restrict that the encoder and aggregator are shared by both inputs. For the NER task, the aggregator is not required. Note that in this work, we will not consider adding a CRF layer after the enocoder, as done by some other NER models e.g. (Lample et al. 2016). Recall our goal here is to discover and evaluate new model architectures. Based on the above discussion, we propose to divide the search space into two subspace: encoder search space and aggregator search space. In (Liu, Simonyan, and Yang 2018) and (Pham et al. 
2018), the objective is to discover new variants of RNNs, so their search spaces are collections of linear or non-linear maps such as tanh and sigmoid. In this work, we define the encoder space at a coarser granularity, allowing us to build a richer search space. As in (Liu, Simonyan, and Yang 2018), the encoder search space contains the zero map and the identity map. We then include 1-d convolutional networks (conv1d), two RNNs, namely LSTM and GRU, and the highway network. The highway network was introduced to help train deeper neural network models (see, e.g., QANet, Yu et al. 2018). These basic models have been widely used in NLP tasks such as classification (Kim 2014), question answering, NER (Lample et al. 2016), and relation extraction (Zeng et al. 2015), to name just a few. Note that we use depth-wise separable convolutions (Chollet 2017) instead of vanilla convolutions, since the former are more parameter efficient. Recent years have witnessed the Transformer architecture (Vaswani et al. 2017) becoming ubiquitous in the research community. At its core, the multi-head self-attention mechanism has shown a superior ability to model long-distance contexts. In this work, similar to (So, Liang, and Le 2019), we include in the search space the multi-head attention layer, excluding the residual connection that is usually added in a transformer block. The point-wise feed-forward layer consists of a conv1d and a residual connection, so we do not include it as a basic operation. Although the transformer has shown great expressive capability, it is difficult to train and easy to over-fit on a small or medium dataset (Guo et al. 2019). One reason is that multi-head attention lets each token pay attention to every token in the sentence, resulting in over-parametrization. Thus, sparser variants have been proposed, e.g. sparse transformers (Child et al. 2019). We include the star-transformer proposed by (Guo et al. 2019). The original design of the star-transformer requires the relay node to be initialized with average pooling and then updated iteratively in the few layers that follow. For better modularization, we modify Algorithm 1 of (Guo et al. 2019) to Algorithm 3 below. Note that without iterative updating, the relay node is initialized by simple averaging, during which the information of the sentence may not be well preserved. Thus, we propose to first enrich the semantic representation of the relay node via multi-head attention between the initialized relay node and the inputs, before we update the satellite nodes. This variant is called the reversed star-transformer, and the algorithm is described in Algorithm 3. Due to limited resources, for attention-based operations we set the attention head number to 2. Now we formally define the encoder search space, which consists of the following operations: • Special zero operation, denoted as null; • Skip connection, denoted as identity; • Highway network, denoted as highway; • 1-d separable convolutions, with kernel size k, where k = 1, 3, 5, denoted as sep_conv1d_1, sep_conv1d_3 and sep_conv1d_5; • Two RNNs, denoted as lstm and gru; • Attention-based operations, with the attention head number set to 2, including: multi-head attention (multi_head_attn_2), star-transformer (star_trans_2) and reversed star-transformer (reversed_star_trans_2). There are several different aggregation operations; how a set of candidate operations like this is mixed during search is sketched below. The most common two aggregators are max pooling and average pooling.
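To make the search over such a space concrete, here is a minimal, illustrative sketch of the DARTS-style relaxation formalized in the search-strategy section below: the discrete choice among candidate operations is replaced by a softmax-weighted mixture whose architecture parameters are trained by gradient descent, and the final discrete operation is obtained by taking the arg-max. The class name, the toy candidate list, and the tensor sizes are our own assumptions; the actual encoder cell mixes the operations listed above on every edge of its DAG.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Softmax-weighted mixture over candidate operations (DARTS-style continuous relaxation)."""

    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self.alpha = nn.Parameter(torch.zeros(len(ops)))    # architecture parameters for this edge

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

    def strongest(self):
        return int(self.alpha.argmax())                      # index of the op kept when discretizing

# Two toy candidate operations over (batch, length, hidden) inputs:
hidden = 8
candidates = [
    nn.Identity(),                                           # the identity op
    nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU()),     # a stand-in for a heavier op
]
mixed = MixedOp(candidates)
x = torch.randn(4, 16, hidden)
out = mixed(x)           # same shape as x; both alpha and the op weights receive gradients
print(out.shape, mixed.strongest())
```

Progressive search-space reduction then amounts to periodically dropping the candidate with the lowest architecture weight from such a mixture and restarting the search, as described in the search-strategy section.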
Self-attention technique is also used for aggregation. It assigns each word a weight to indicate the importance of a word depending on the task on hand. A few words that are crucial to the task will be emphasized while the "boring" words are ignored. We also include dynamic routing (Gong et al. 2018) into our aggregator operation space. Unless specified, we use a dynamic routing aggregator with 4 capsules and 3 iteration steps. Now we formally define the aggregator search space, which includes the following modules: • Max pooling, denoted as max-pool; • Average pooling, denoted as avg-pool; • Self-attention pooling, denoted as self-attn-pool; • Dynamic routing, denoted as dynamic-routing. In this work we use Differentiable Architecture Search (DARTS) (Liu, Simonyan, and Yang 2018) as our architecture search framework. The goal is to search for an encoder cell alongside with an aggregator cell. Employing the terminology in (Liu, Simonyan, and Yang 2018), a cell is defined as a directed acyclic graph (DAG) of N nodes, x 0, x 1, · · ·, x N −1, where each node is a network layer, i.e., performing a specific mathematical function. We denote the search space as Φ i,j, in which each element represents the information flow connecting node i to node j, which consists of a set of operations weighted by the architecture parameters α (i,j), and is thus formulated as: where i < j. An intermediate node can be represented as Given multiplier m, a hyperparameter determining the number of nodes whose are included as the cell's output, the output of the cell is The NLU tasks we are dealing with are small or medium sized, so we will not consider stacking multiple encoder cells in this paper. Our search space is large, thus we employ a progressive search space reduction strategy during search, which is similar to P-DARTS ). The procedure is as follows: i) start the search with the whole operation space; ii) let k denote the epoch interval length for a reduction. after every k epochs, we drop the operation in Φ i,j having the lowest score; iii) after each reduction, the search re-starts; iv) repeat step ii) and step iii) till the order of encoder search space drops to 5, and the order of the aggregator search space drops to 1; v) the search procedure continues with the remaining search space till convergence. After obtaining the continuous architecture encoding α, deriving the aggregator is simple, since we only need to select the most likely aggregation operation. For the encoder cell, we consider two approaches to derive the final discrete network architecture: • Following (Liu, Simonyan, and Yang 2018), retain up to k strongest predecessors for each intermediate node, where k = 1, 2. Note this approach ignores the appearance of the null operation if it obtains the highest score. • Directly replace every mixed operation as the most likely operation by taking the arg-max. If the best operation connecting two intermediate nodes is a null operation, this connection is dropped. In this way, we may find a sparser new encoder cell. In our experiments, for each search or evaluation, we assign 2 CPU cores, 20G memory and 1 Tesla P100 GPU card. We conduct experiments on three different kinds of tasks with 5 benchmark datasets, whose statistics are shown in Table 1. Specifically, SST-1 and SST-2 (Socher et al. 2013) are two text classification data-sets. Sci-tail and MedNLI are NLI datasets, and CoNLL2003 (Sang and De Meulder 2003) is a benchmark NER datasets 3. 
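Before moving on to the per-dataset details, the continuous relaxation described earlier in this section can be illustrated with a short sketch of a mixed operation: every candidate operation on an edge (i, j) is applied to the input and the outputs are combined with softmax-normalised architecture weights α. In DARTS proper the α's are optimised on validation data in a bilevel scheme, which this sketch omits; the class name MixedOp and the pruning helper are our own illustration of the progressive search-space reduction, not the exact code used in our experiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One edge (i, j) of the cell: a softmax-weighted sum of all candidate ops."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        # one architecture parameter per candidate operation on this edge
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=-1)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

    def prune_weakest(self):
        """Progressive search-space reduction: drop the lowest-scoring operation."""
        k = int(torch.argmin(self.alpha).item())
        self.ops = nn.ModuleList([op for i, op in enumerate(self.ops) if i != k])
        keep = [i for i in range(self.alpha.numel()) if i != k]
        self.alpha = nn.Parameter(self.alpha.data[keep].clone())

# Example usage with the hypothetical registry sketched earlier:
# ops = [build() for build in build_encoder_ops(dim=300).values()]
# edge_ij = MixedOp(ops)
```

Deriving the discrete architecture then amounts to keeping, for each edge, the operation with the largest α (or dropping the edge entirely if the null operation wins), as described above.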
SST-1 Stanford Sentiment Treebank is a movie review dataset which has been parsed and further splitted to train/dev/test set (Socher et al. 2013). SST-2 This dataset is a binary-class version of SST-1, with neutral reviews removed. SciTail This is a textual entailment dataset derived from a science question answering (SciQ) dataset. MedNLI It is a NLI dataset annotated by doctors, grounded in the medical history of patients (Romanov and Shivade 2018). CoNLL2003 This dataset consists of 200k training words which have been annotated as Person, Organization, Location, Miscellaneous, or Other (non-named entity). Our experimental protocols follow (Liu, Simonyan, and Yang 2018). Experiments on each task consist of two stages, architecture search and architecture evaluation. In the search stage, we search for the cells on the train set, and determine the best pair of encoder cell and aggregator cell based on the performance on validation set. In the second stage, we derive discrete architectures from the cells, train them from scratch, and report their performance on the test set. Initially the search space consists of all the operations in Section Search Space Design. For every 3 epochs we carry out search space reduction once, till the search space is halved. The number of intermediate nodes N for the encoder cell ranges from 1 to 3. With N equal to 3, the search cell takes up 9G GPU memory. Note that the number of nodes for the aggregator cell can only be 1. Similar with ENAS (Pham et al. 2018) and DARTS (Liu, Simonyan, and Yang 2018), we enable layer normalization in each node to prevent gradient explosion during architecture search, and disable it during architecture evaluation. For architecture search, both the embedding and hidden size are set to 300. Word embedding is initialized from pretrained Glove (Pennington, Socher, and Manning 2014). We randomly initialize word vectors for words that do not appear in Glove. The batch size is 32 for the classification tasks and 16 for the others. The learning rate is 1e-4 for both network parameters and architecture parameters, and the weight decay is set to 1e-5, and the dropout rate is set to 0.5. Dropout is applied after embedding look-up, after encoding layer, and to the output layer. The learning rate is warmed up for 1000 steps and then it decreases linearly. The max number of epochs is set 60. We use Adam (Kingma and Ba 2015) to optimize both the network parameters and architecture parameters. The search takes around 1.5 GPU day (Tesla P100) for SST-1, and 0.3 for the SciTail tasks. We run each search configurations twice with different random seeds and pick the best cell based on the validation performance, and each search run in at most 3 different dicrete NN architectures. The hyper-parameters are the same with those in the search stage. Results on SST Results on SST-1 and SST-2 datasets are listed in Table 2. On the SST-1, DARTS generate a network architecture (DARTS-SST-1-V0) that performs better than most of the traditional NN models. Not that the encoder cell of DARTS-SST-1-V0 contains only RNN and CNN operations, but the exact details of combination of different level of features are impossible to design manually. The best ar- (Le and Mikolov 2014) 48.7 87.8 MT-LSTM (F2S) 49.1 87.2 Tree-LSTM (Tai, Socher, and Manning 2015) 51.0 88.0 CNN-Tensor (Lei, Barzilay, and Jaakkola 2015) 51.2 -BiLSTM + max pooling (Gong et al. 2018) 48.0 87.0 BiLSTM + average pooling (Gong et al. 2018) 46.2 85.2 BiLSTM + self-att (Gong et al. 
2018) 48.2 86.4 BiLSTM + dynamic routing (Gong et al. 2018) 50.5 87.6 Emb + self-att (Shen et al. 2018) 48.9 -DiSAN (Shen et al. 2018) 51.7 -BiLSTM + self-att (Yoon, Lee, and Lee 2018) 50.4 88.2 CNN + self-att (Yoon, Lee, and Lee 2018) 50.6 88.3 Dynamic self-att (Yoon, Lee, and Lee 2018) 50.6 88.5 Transformer (Guo et al. 2019) 50 chitecture (DARTS-SST-2-V0) we obtained on the SST-2 dataset involves a star-transformer operation and an identity map. Note that since (Guo et al. 2019) did not provide on SST-2, we use the code from fastNLP 4 to run the transformer and the original star-transformer on SST-2. The given by us are all the average of 10 different runs. We can see that DARTS-SST-2-V0 can obtain comparable to the SOTA on SST-2. We also experiment on the transferability of the learned architectures. From Table 2, we can see that DARTS-SST-2-V0 performs worse than DARTS-SST-1-V0 on SST-1 with a significant margin, but DARTS-SST-1-V0 also performs competitively on SST-2. Results on NLI tasks Among the architecture candidates derived from the search on SciTail, we find that the one obtained by accepting the null operation when it gets the highest score (DARTS-SciTail-V0) performs best. In addition, this search run gives the average pooling as the aggregator instead of dynamic-routing. The are presented in Table 3: Test accuracy (%) on the SciTail dataset. Model ACC 600D ESIM 70.6 Decomposable Attention 72.3 DGEM 72.3 AdvEntuRe 79.0 HCRN (Tay, Luu, and Hui 2018) 80.0 DeIsTe (Yin, Schütze, and Roth 2018) 82.1 CAFE (Yin, Schütze, and Roth 2018) 83.3 MIMN 84.0 ConSeqNet 85.2 HBMP (Mihaylov et al. 2018) 86.0 star-transformer (Guo et al. 2019) 79 Table 3. DARTS-SciTail-V0 achieves a competitive performance on the test set, outperforming the baseline models such as ESIM and decomposable attention by a large margin. It also outperforms the of the star-transformer and transformer even after extensively parameters tuning. Our model is actually the best one that has no inter-sentence attentions other than the final interaction before the prediction layer, and uses no outside resources, no manually designed features and no extra training mechanism like adversarial training. As we can see from Figure 5 that, on the MedNLI dataset, the search gives out a architecture (DARTS-MedNLI-V0) that quite resembles the original implementation of the multi-head attention inside the transformer block, except the residual connection is replaced by a sep conv with kernel size 3. DARTS-MedNLI-V0 performs worse than the original star-transformer, but it is better than the original transformer, and the baseline ESIM and InferSent. We also look into the transferability between the two task. We find that although the datasets are from different domains, the architecture searched on one performs comparable on the other. For the NER task CoNLL2003, since our goal is to compare the previous models and the models discovered by NAS, we do not include the CRF layer which is proven to be a standard component of the best NER models (Lample et al. 2016). We do not use any outside resources like gazetteers, or manually designed features like suffix features and capitalization features, or charac- ter embedding. To eliminate implementation discrepancies, we re-run all the ourselves for this task. Figure 6 gives the searched architecture (DARTS-CoNLL2003-V0) and Table 5 gives out the . The LSTM sets a strong baseline, and the star-transformer and GRU performs significantly worse. 
Our DARTS-CoNLL2003-V0 works slightly better than the LSTM. This paper addresses NAS for a series of NLU tasks. Corresponding to the encoder-aggregator architecture of typical NN models for NLU (Gong et al. 2018), we redefine the search space by splitting it into an encoder search space and an aggregator search space. Our search strategy is based on DARTS (Liu, Simonyan, and Yang 2018) and P-DARTS. Experiments show that the architectures discovered by NAS achieve results comparable to the previous SOTA models. In the future, we would like to investigate one-shot architecture search on larger-scale NLU tasks.
rkgARFTUjB
Neural architecture search for a series of natural language understanding tasks: we design the search space for NLU tasks and apply differentiable architecture search to discover new models.
Network embedding (NE) methods aim to learn low-dimensional representations of network nodes as vectors, typically in Euclidean space. These representations are then used for a variety of downstream prediction tasks. Link prediction is one of the most popular choices for assessing the performance of NE methods. However, the complexity of link prediction requires a carefully designed evaluation pipeline to provide consistent, reproducible and comparable . We argue this has not been considered sufficiently in recent works. The main goal of this paper is to overcome difficulties associated with evaluation pipelines and reproducibility of . We introduce EvalNE, an evaluation framework to transparently assess and compare the performance of NE methods on link prediction. EvalNE provides automation and abstraction for tasks such as hyper-parameter tuning, model validation, edge sampling, computation of edge embeddings and model validation. The framework integrates efficient procedures for edge and non-edge sampling and can be used to easily evaluate any off-the-shelf embedding method. The framework is freely available as a Python toolbox. Finally, demonstrating the usefulness of EvalNE in practice, we conduct an empirical study in which we try to replicate and analyse experimental sections of several influential papers. Link prediction is an important task with applications in a wide range of fields such as computer science, social sciences, biology, and medicine BID6 BID14 BID15 BID22. It amounts to estimating the likelihood for the existence of edges, between pairs of nodes that do not form an edge in the input graph. Many Network Embedding (NE) methods (e.g., BID0 BID2 BID5 BID8 BID10 BID12 BID17 BID18 BID19 have recently been applied to solving link prediction problems, showing promising . These methods map nodes in the network to vectors in IR d . This embedding is then used for a variety of tasks such as visualization, multi-label classification, clustering or link prediction. The challenges of evaluating NE methods for link prediction We argue that the practical performance of most NE methods is poorly understood and that experiments in many papers are difficult to compare due to variation in experimental setup and evaluation procedures. In this paper, we focus on a number of difficulties specific to the evaluation of NE methods for link prediction. Link prediction is a particularly challenging task to evaluate as it involve a number design choices, which can confound the and are prone to errors.1) Train-test splitting of graphs For example, a typical implicit assumption is that the input graph is not complete, and the purpose is to accurately predict the missing edges. To evaluate the performance of an NE method for link prediction, one thus needs an (incomplete) training graph along with a (more) complete version of that graph for testing. Much research has been devoted to determining the best approach to generate these training graphs BID6 BID14 BID22. Strong theoretical and empirical evidence suggest that in order to fairly evaluate link prediction methods, snapshots of the network at different points in time should be used for training and testing. In this way, the link prediction methods are tested on the natural evolutions of the networks. However, the availability of such snapshots is uncommon and raises additional questions, such as how to choose the time intervals for splitting the network. 
For these reasons, authors typically resort to sampling sets of edges from the input graphs and using the ing sub-graphs for training BID5 BID8 BID10 BID12. The remaining edges are used as positive test examples. The process of sampling edges is not standardized and varies between scientific works. The relative sizes of the train and test sets, for example, is a user-defined parameter which varies significantly. In BID8; BID10 the authors use a 50-50 train-test split, in BID5 ) a 60-40, in Lai et al. (2017 an 80-20 and in BID20 values ranging from 30-70 up to 80-20.A related problem is that, in addition to the 'positive' train and test edges, often also 'negative' edges (or non-edges) are required. Sometimes these are used to derive the embedding, while in other cases they are used only to train the classifier that predicts links. These sets of non-edges can be selected according to different strategies (Kotnis & Nastase) and can be of various sizes.2) From node embeddings to edge predictions Furthermore, most NE methods simply provide node embeddings. From these, edge embeddings need to be derived prior to performing predictions. There are several approaches for deriving edge embeddings which also seem to have a strong impact on the performance of different methods BID8.3) Evaluation measures Also the metrics used to evaluate the accuracy varies, e.g., from AUC-ROC BID10, to precision-recall BID21, to precision@k BID20. Finally, it appears to be common practice in recent literature to use recommended default settings for existing methods, while tuning the hyper-parameters for the method being introduced. When the recommended default settings were informed by experiments on other graphs than those used in the study at hand, this can paint an unduly unfavorable picture. For none of the above problems a gold standard exists which would clear the preferable choice. There are many valid choices and this, together with the fact that not all values of parameters and choices are usually reported have led to a situation where no one can see the forest for the trees. To address this problems, we propose EvalNE, a framework that simplifies the complex and time consuming process of evaluating NE methods for link prediction. EvalNE automates many parts of the evaluation process: hyper-parameter tuning, selection of train and test edges, negative sampling, and more. The framework: Implements guidelines from research in the area of link prediction evaluation and sampling BID6 BID14 BID22. Includes (novel) efficient edge and non-edge sampling algorithms. Provides the most widely used edge embedding methods. Evaluates the scalability and accuracy of methods, through wall clock time and a range of fixed-threshold metrics and threshold curves. Integrates in a single operation the evaluation of any number of NE methods coded in any language on an unbounded array of networks. Finally, EvalNE ensures reproducibility and comparability of the evaluations and . By making it easier to reproduce, compare, and understand evaluation pipelines, we regain the ability to assess the real strengths and weaknesses of existing NE methods, paving the way for more rapid and more reliable progress in the area. The remainder of this paper is organized as follows. In Section 2 we discuss related work. Section 3 presents the proposed evaluation framework including the novel edge samplig strategy. 
Empirical of attempts to reproduce evaluation setups of several papers and of the proposed edge sampling strategy are reported in Section 4. Finally Section 5 concludes this paper. The open-source toolbox is freely available at [scrapped for anonymity]. The evaluation of link prediction has been studied in several works BID6 BID14 BID16 BID22. These papers focus on the impact of different train set sampling strategies, negative sampling, and fair evaluations criteria. Link prediction as evaluation for the representations learned by a NE algorithm was first introduced in the pioneering work of BID8. The survey by BID23 points out the importance of link prediction as an application of network representation learning. The authors also signal the inconsistencies in the evaluation of different NE approaches and conduct an empirical analysis of many of these methods on a wide range of datasets. Their empirical study, however, only Figure 1: Diagram of the types of methods which can be evaluated using EvalNE. Gray blocks represent modules provided by the library and white blocks are the user-specified methods to be evaluated. The library allows for the evaluation of end-to-end prediction methods (several LP heuristics are included as baselines here), edge embedding methods, and node embedding methods.focuses on vertex classification and clustering. The importance of standard evaluation frameworks as tools to bridge the gap between research and application is discussed in BID9.To the best of out knowledge only two frameworks for the evaluation of NE methods currently exist. OpenNE is a recently proposed toolbox for evaluating NE methods on multi-label classification. The toolbox also includes implementations of several state-of-the-art embedding methods. GEM BID7 ) is a similar framework which also implements a variety of embedding methods and includes basic evaluation on multi-label classification, visualization, and link prediction tasks. These frameworks, however, are focused on the implementations of embedding methods rather than the evaluation pipeline. Furthermore, these libraries are limited to the NE methods provided by the authors or require new implementations which comply with pre-defined interfaces. In this section we discuss the key aspects of EvalNE. The framework has been designed as a pipeline of interconnected and interchangeable building blocks, as illustrated by Figure 1. The modular structure of our framework simplifies code maintenance and the addition of new features, and allows for flexible model evaluation. EvalNE can be used to evaluate methods providing node embeddings, edge embeddings, or similarity scores (we include in this category the link prediction heuristics). Next we describe the frameworks and its components in more detail as well as the software design. The core building blocks of EvalNE are the data split and model evaluation. These bocks constitute the most basic pipeline for assessing the quality of link predictions. However, in order to extend the evaluation process to other types of embedding methods we also provide building blocks for data manipulation and preprocessing, learning edge embeddings from node embeddings, binary classification and a range of LP heuristics which can be used as baselines. Before presenting in detail each of these building blocks we introduce some needed notation. We represent an undirected weighted network as G = (V, E, W) with vertex set V = {1, . . ., N}, edge set E ✓ V ⇥ V and weight matrix W 2 IR N⇥N. 
Edges are represented as unordered pairs e = (u, v) 2 E with weights w e 2. E train and E test denote the training and testing edge sets. We represent a d-dimensional node embedding as X = (x 1, x 2, . . ., x N) where X 2 IR N⇥d.Preprocessing The toolbox offers a variety of functions to load, store, and manipulate networks. These include methods to prune nodes based on degree, remove self-loops, relabel nodes, obtain sets of specific types of edges, restrict networks to their main connected components and obtain common network statistics. The preprocessing functions of EvalNE build on top of and can be used in combination with those provided by other mainstream software packages (e.g. Networkx).Data split As pointed out in Section 1, in order to perform link prediction on a given input graph G, sets of train and test edges are required. The set of training edges is generally required to span all nodes in G and induce a train graph G train with a single connected component, because embeddings of independent components will be far away from and unrelated to each other. Most studies resort to a naive algorithm for deriving a connected G train. The procedure removes edges from an input graph iteratively until the required number of train edges remain. The removed edges are used as test samples. In each iteration, the connectedness of the graph is checked and an edge is only removed if it does not cause the graph to become disconnected. This requirement is generally satisfied by running a Breadth First Search (BFS) on the graph after each edge removal, which is a costly operation (O(|V | + |E|)).Integrated in our evaluation framework, we include a novel algorithm to perform the train-test splits which, as we will show in Section 4, is orders of magnitude faster yet equally simple. Our algorithm, also accounts for the fact that the training graph G train must span all nodes in G and contain a single connected component. Given as input an undirected graph G = (V, E, W) and a target number of train edges m, the proposed algorithm proceeds as follows:1. Obtain a uniform spanning tree ST of G 2. Initialize the set of training edges E train to all edges in ST 3. Add m |E train | edges to E train selected uniformly at random wihout replacement from DISPLAYFORM0 We select a spanning tree uniformly at random from the set of all possible ones using Broder's algorithm BID1 ):1. Select a random vertex s of G and start a random walk on the graph until every vertex is visited. For each vertex i 2 V \ s collect the edge e = (j, i) that corresponds to the first entrance to vertex i. Let T be this collection of edges. 2. Output the set T.On expectation, the complexity of the uniform spanning tree generation is O(n log n) BID1 and the addition of random edges can be efficiently done in O(|E train |). DISPLAYFORM1 we first construct an equivalent undirected version G by adding reciprocals for every edge in E ⇤. We then run the same algorithm described above on G and include in the training set only those edges present in the initial directed graph G ⇤. This method in a weakly connected train graph spanning the same nodes as the original G ⇤.In addition to train and test edges, sets of train and test non-edges (also referred to as negative samples) are required in order to evaluate link prediction. These are edges between pairs of vertices u, v such that e = (u, v) / 2 E. The proposed toolbox can compute these non-edges according to either the open world or the closed world assumption. 
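The split procedure above can be sketched in a few lines of Python on top of NetworkX: Broder's random walk yields a uniform spanning tree whose edges seed E_train, and the remaining train edges are drawn uniformly from the leftover edges, with everything unselected becoming E_test. The function names (broder_spanning_tree, split_train_test) and the toy example are ours; this is a simplified illustration of the algorithm rather than the EvalNE implementation itself.

```python
import random
import networkx as nx

def broder_spanning_tree(G, seed=None):
    """Uniform spanning tree via Broder's algorithm: random-walk until every node
    is visited and keep, for each node, the edge by which it was first entered."""
    rng = random.Random(seed)
    nodes = list(G.nodes())
    current = rng.choice(nodes)
    visited = {current}
    tree_edges = set()
    while len(visited) < len(nodes):
        nxt = rng.choice(list(G.neighbors(current)))
        if nxt not in visited:
            visited.add(nxt)
            tree_edges.add((current, nxt))
        current = nxt
    return tree_edges

def split_train_test(G, train_frac=0.5, seed=None):
    """Return (E_train, E_test) such that E_train spans G and induces a connected graph."""
    rng = random.Random(seed)
    m = int(train_frac * G.number_of_edges())
    train = broder_spanning_tree(G, seed)
    remaining = [e for e in G.edges()
                 if e not in train and (e[1], e[0]) not in train]
    rng.shuffle(remaining)
    need = max(m - len(train), 0)
    train |= set(remaining[:need])
    test = set(remaining[need:])
    return train, test

# Example: a 50-50 split of a toy graph restricted to its main connected component
G = nx.fast_gnp_random_graph(200, 0.05, seed=0)
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
E_train, E_test = split_train_test(G, train_frac=0.5, seed=0)
```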
The two strategies only differ in the selection of the train non-edges. Under the open world assumption, train non-edges are selected such that they are not in E train. Thus, this strategy allows overlapping between train non-edges and test real edges. Under the closed world assumption, we consider the non-edges to be known a priori and therefore select them so that they are neither in E train nor in E test.The number of train and test edges and non-edges are user-defined parameters. For the train set size, fractions between 50% and 90% of total edges E of G are recommended. For values below 50%, the ing train graph will often not preserve the properties of G BID13. The LP heuristics take as input a train graph G train = (V, E train, W) spanning the same set of vertices as the initial graph G but containing only the edges in the train set and output scores of node similarity which can be directly used for link prediction. EvalNE includes the following heuristics: common neighbours (CN), Jaccard coefficient (JC), Adamic-Adar index (AA), resource allocation index (RA), preferential attachment (PA) and Katz. In addition to these, a random prediction model is provided for reference. This method simply outputs a uniform random value in as the likelihood of a link between any pair of given vertices (u, v).For the case of undirected graphs we use the usual definitions of the heuristics (see Appendix A 1) and for directed graphs we restrict our analysis to either the in-neighbourhood (i (u)) or the outneighbourhood (o (u)) of the nodes u 2 V.Node to edge embedding Unlike for the abovementioned LP heuristics where the output can be directly used for link prediction, for NE methods this is generally not the case. Most authors only 1 https://www.dropbox.com/sh/8whq0di1sb9pz8m/AADl1DIrWRWOjtwuVum_TxnGa?dl=0 release code to compute node embeddings. Thus, an additional step of learning edge embeddings is required in order to perform link predictions via binary classification. The edge representations can be learned in an unsupervised fashion, for any given edge e = (u, v), by applying a binary operator over the embeddings of nodes u and v, i.e. x u and x v respectively: BID8 propose the following alternatives for the operator which we include in our evaluation framework: average, hadamard, weighted L 1 and weighted L 2 (See Appendix B for equations). Additional user-defined operators can be also easily integrated. DISPLAYFORM0 Binary classification Most NE methods (e.g., BID5 BID8 BID10 BID12 rely on a logistic regression classifier to predict the probability of links given the edge embeddings. In EvalNE we implement logistic regression with 10-fold cross validation. The framework, however, is flexible and allows for any other binary classifier to be used. Evaluation The proposed framework can evaluate the scalability, parameter sensitivity and accuracy of embedding methods. We asses the scalability directly by measuring wall clock time. The performance can be easily reported for different values of embedding dimensionality and hyperparameter values. Finally, the link prediction accuracy is reported using two types of metrics: fixedthreshold metrics and threshold curves. Fixed-threshold metrics summarize method performance to single values. EvalNE provides the following: confusion matrix (TP, FN, FP, TN), precision, recall, fallout, miss, accuracy, F-score and AUC-ROC. Threshold curves present the performance of the methods for a range of threshold values between 0 and 1. 
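To illustrate the node-to-edge-embedding step and the downstream classifier, the sketch below implements the four binary operators listed above (using their standard definitions from Grover & Leskovec; see Appendix B) and feeds the resulting edge embeddings to a 10-fold cross-validated logistic-regression classifier scored with AUC-ROC, using class probabilities rather than hard labels. The function names and the plain scikit-learn configuration are illustrative assumptions, not the exact EvalNE code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score

# Binary operators mapping a pair of node embeddings to an edge embedding.
EDGE_OPS = {
    "average":     lambda xu, xv: (xu + xv) / 2.0,
    "hadamard":    lambda xu, xv: xu * xv,
    "weighted_l1": lambda xu, xv: np.abs(xu - xv),
    "weighted_l2": lambda xu, xv: (xu - xv) ** 2,
}

def edge_embeddings(X, pairs, op="hadamard"):
    """X: (N, d) node embeddings; pairs: iterable of (u, v) node-index pairs."""
    f = EDGE_OPS[op]
    return np.array([f(X[u], X[v]) for u, v in pairs])

def evaluate_link_prediction(X, train_edges, train_nonedges,
                             test_edges, test_nonedges, op="hadamard"):
    """Train a logistic-regression link predictor and report test AUC-ROC."""
    Xtr = np.vstack([edge_embeddings(X, train_edges, op),
                     edge_embeddings(X, train_nonedges, op)])
    ytr = np.concatenate([np.ones(len(train_edges)), np.zeros(len(train_nonedges))])
    Xte = np.vstack([edge_embeddings(X, test_edges, op),
                     edge_embeddings(X, test_nonedges, op)])
    yte = np.concatenate([np.ones(len(test_edges)), np.zeros(len(test_nonedges))])

    clf = LogisticRegressionCV(cv=10, max_iter=1000).fit(Xtr, ytr)
    scores = clf.predict_proba(Xte)[:, 1]   # class probabilities, not labels
    return roc_auc_score(yte, scores)
```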
EvalNE includes precision-recall and receiver operating curves BID4. The framework provides recommendations of the most suitable metrics based on the evaluation setup. The EvalNE framework is provided as a Python toolbox compatible with Python2 and Python3 which can run on Linux, MacOS, and Microsoft Windows. The toolbox depends only on a small number of popular open-source Python packages, and the coding style and documentation comply with strict formats. The documentation provided contains instructions on the installation, examples of high-level and low-level use and integration with existing code. EvalNE can be used both as a command line tool and an API. As a command line tool it exposes a configuration file which determines the complete experimental setup a user wants to test, including methods to evaluate, networks and edge splits. For convenience, several pre-filled configuration files that reproduce experimental sections of influential NE papers are provided. When used as an API, the framework exposes a modular design with blocks providing independent and self contained functionalities. The user interacts with the library through an evaluator object that integrates all the building blocks and orchestrates the method evaluation pipeline. This section aims to demonstrate the usefulness and flexibility of EvalNE. To this end we have selected and replicated the experimental sections of four papers from the NE literature, i.e. Node2vec BID8, CNE BID10, PRUNE BID12 and SDNE BID20. We also report experiments comparing the proposed edge sampling algorithm to the naive approach. Experimental setups The experimental settings we aimed to replicate are as follows:Node2vec This work used the following embedding methods and link prediction heuristics in the evaluation: Node2vec, DeepWalk, LINE, Spectral Clustering, CN, JC, AA, and PA. Experiments are performed on the Facebook, PPI, and AstroPh datasets with 50-50 train-test splits. The same number of edges and non-edges are used throughout the experiment and the edge embedding methods used are average, hadamard, weighted L 1 and weighted L 1. Results are reported in terms of AUC-ROC. sizes for the non-edge sets. The node to edge embedding operator used is not reported by the authors and the are presented in terms of AUC-ROC. In this setting, the network used are directed. CNE The evaluation setup is like the Node2vec paper, with the following differences: CNE and Metapath2vec are also evaluated; and evaluation is done also on the BlogCatalog, Wikipedia, and StudentDB networks. Results are reported for the hadamard operator only. This paper comes from a research group that we have occasional collaborations with. We had access to the full code for their evaluation pipeline, but the EvalNE codebase was implemented completely separately. SDNE The paper reports experiments for the following NE methods: SDNE, LINE, DeepWalk, GraRep, LapEig and CN. The link prediction experiments are performed on the GR-QC dataset with a 0.85 train-test fraction. The are reported only for the hadamard operator and in terms of precision@k for a colection of values between 2 and 10000. The number of train non-edges is the same as that of train edges while all the remaining non-endges in the graph are used for testing. We replicated these setting with the following exceptions for node2vec we did not include spectral clustering and for PRUNE we could not obtain NRCL and lacked the computational resources to evaluate the webspam dataset. 
In all cases we report the same metrics as the original papers and, except were reported differently, average the over three repetitions. We use the closed-world assumption for the non-edges, logistic regression for binary classification and tuned the hyper-parameters equivalently as reported in each paper. All experiments were run using specific configuration files created for each setting which will be made public. Regarding the implementations, we evaluated the LP heuristics included in EvalNE; original code by the authors for Deepwalk, Node2vec, LINE, PRUNE, CNE, and Metapath2vec, and for the remaining ones, the implementations in the OpenNE 2 library (See Appendix C for references to exact implementations). Table 1 contains the main characteristics of all the networks used. Network sizes are those corresponding to the main connected components. For each of the experimental settings reproduced we present a table containing the original values reported in the paper, and between parentheses the difference between our and this value TAB2. Positive values in parentheses thus indicate that the our are higher than the ones reported in the original papers by that margin, while negative values indicate the opposite. The obtained show that our experiments align fairly well with those reported in the CNE and PRUNE papers. The only exception is the of Metapath2vec on the Facebook dataset, which substantially differs from the CNE paper. For Node2vec and SDNE, the differences are larger and occasionally severe (differences over 0.15 are marked in bold in all tables). Possible explanation for this are the use of different implementations of the NE methods, different not reported default parameters or the use of parallelization. We have studied the effect of each of these possible causes and found 1) important differences in accuracy when comparing different implementations of LINE and Node2vec on identical settings, 2) different initial default parameters for distinct implmentations of deepwalk, and 3) performance degradation for Metapath2vec when using parallelization (See Appendix D for more details).In addition, we have also observed important differences when computing the AUC-ROC directly from class labels or from class probabilities. In order to reproduce the CNE experiments we used class probabilities (which is the appropriate choice), while for Node2vec class labels appear to have been used. These illustrate the need to: (a) create reproducible pipelines for experiments, and (b) report specifics about the parameter settings and precise implementations used in the evaluation. We now present our comparing the proposed edge and non-edge sampling strategy in terms of accuracy and scalability to the naive approach. For the accuracy experiment we selected the Facebook, PPI, and arXiv datasets. We used both the proposed and naive edge split strategies and performed link predictions with the CN, JC, AA, and PA heuristics. We output all the metrics available in EvalNE and compute their absolute difference. Recall is the metric presenting the highest deviation so we collect these in TAB1. For the AUC-ROC, one of the most widely used metrics, the maximum deviation is of 0.01 reached on the arXiv dataset by the PA heuristic. Consistently, across methods and datasets, the obtained with the naive approach were slightly higher than those using our method. Regarding the scalability experiments, in FIG2 we summarize the execution time in seconds of both methods on four different datasets. 
This experiment shows that the proposed method is several order of magnitude faster than the naive approach and independent on the number of train and test edges required. The execution time of the naive approach further increases as more test edges are required. In FIG3 we select the BlogCatalog dataset and restrict it to sub-graphs of different prec@100 prec@200 prec@300 prec@500 prec@800 prec@1000 prec@10000 SDNE 1 (0.00) 1 (0 number of nodes ranging from 400 up to 10000. The execution times using both sampling strategies are reported in FIG3 .For our proposed edge sampling we also evaluated the variance in method performance for different number of experiment repetitions (or equivalently, the effect of averaging the over different number of edge splits). In this case we used the same datasets, NE methods and LP heuristics as in the Node2vec experiment and 50-50 and 90-10 train-test splits. We compared the obtained in a single run with those averaged over 5 independent runs for both split fractions. For the 50-50 train-test split the average difference observed over all methods, datasets and metrics is 3.4 · 10 3 with a variance of 1.8 · 10 5 and a maximum difference of 0.0293. For the 90-10 split, the average difference is 3.0 · 10 3 the variance of 1.0 · 10 5 and maximum difference 0.0186. These indicate that a single train and test split already gives a sufficiently good estimate of the generalization error of the models evaluated. Thus, experiment repeats are not needed for networks of similar sizes. These observations seem to hold for different train-test split fractions. The recent surge of research in the area of network embeddings has ed in a wide variety of data sets, metrics, and setups for evaluating and comparing the utility of embedding methods. Comparability across studies is lacking and not all evaluations are equally sound. This highlights the need for specific tools and pipelines to ensure the correct evaluation of these methods. Particularly, the use of representation learning for link prediction tasks requires train and test sampling, non-edge sampling, and in many cases selection of edge embedding methods and binary classifiers. The evaluation procedure, thus, becomes an ensemble of tasks which allow for many errors or inconsistencies. In this work we have proposed EvalNE, a novel framework that can be used to evaluate any network embedding method for link prediction. Our pipeline automates the selection of train and test edge sets, simplifies the process of tuning model parameters and reports the accuracy of the methods according to many criteria. Our experiments highlight the importance of the edge sampling strategy and parameter tuning for evaluating NE methods. We have also introduced a scalable procedure to select edge sets from given networks and showed empirically that is orders or magnitude faster than the naive approaches used in recent literature.
H1eJH3IaLN
In this paper we introduce EvalNE, a Python toolbox for automating the evaluation of network embedding methods on link prediction and ensuring the reproducibility of results.
Deep learning models can be efficiently optimized via stochastic gradient descent, but there is little theoretical evidence to support this. A key question in optimization is to understand when the optimization landscape of a neural network is amenable to gradient-based optimization. We focus on a simple neural network two-layer ReLU network with two hidden units, and show that all local minimizers are global. This combined with recent work of; show that gradient descent converges to the global minimizer. For the duration of this paper, we will assume that x is standard normal in R n and all expectations are with respect to the standard normal. The population loss function is: DISPLAYFORM0 Define DISPLAYFORM1 so the loss can be rewritten as (ignoring additive constants, then multiplied by 4): DISPLAYFORM2 g(w i, w j) − 2g(w i, w * j).From BID0 we get DISPLAYFORM3 and DISPLAYFORM4 In this paper, we study the landscape of f over the manifold R = {w 1 = w 2 = 1}. The manifold gradient descent algorithm is: DISPLAYFORM5 where P R is the orthogonal projector onto the manifold R, and ∇ R is the manifold gradient of f. In order to analyze the global convergence of manifold gradient descent, we need a characterization of all critical points. We show that f (W) have no spurious local minimizer on the manifold R. Theorem 4.1. Assume wThe next theorem shows that manifold gradient descent with random initialization converges to the global minimizer Theorem 4.2. With probability one, manifold gradient descent will converge to the global minimizers. Proof. The objective function f is infinitely differentiable on manifold R. Using Corollary 6 of BID16, manifold gradient descent will converge to a local minimizer with probability one. Since the only local minima for function f are w 1 = wThe second observation is we only need to compute the gradient on the manifold and check whether it's zero. Define m(w 1) = sin θ 1 ∂f ∂w11 − cos θ 1 ∂f ∂w12 and m(w 2) = sin θ 2 ∂f ∂w21 − cos θ 2 ∂f ∂w22. Then for w 1 and w 2, the norm of the manifold gradients are |m(w 1)| and |m(w 2)|. Thus, we only need to check whether the value of function m is 0 and get rid of the absolute value sign. Then we apply the polar coordinates onto the manifold gradients, and obtain: m(w 2) = 1 π (π − θ w1,w2) sin(θ 2 − θ 1) + cos θ 2 − sin θ 2 + 1 π θ w2,w * 1 sin θ 2 − θ w2,w * 2 cos θ 2.The last observation we need for this theorem is that we must divide this problem into several cases because each angle in is a piecewise linear function. If we discuss each case independently, the ing functions are linear in the angles. The details are in Appendix B. After the calculation of all cases, we found the positions of all the critical points: WLOG assume θ 1 ≤ θ 2, then there are four critical points in the 2D case: (θ 1, θ 2) = (0,). After finding all the critical points, we compute the manifold Hessian matrix for those points and show that there is a direction of negative curvature. The details can be found in Appendix C. The next step is to reduce it to a three dimensional problem. As stated in the two-dimensional case, the gradient is in span{w 1, w 2, w * 1, w * 2}, which is four-dimensional. However, using the following lemma, we can reduce it to three dimensions and simplify the whole problem. Lemma 4.4. 
If (w 1, w 2) is a critical point, then there exists a set of standard orthogonal basis (e 1, e 2, e 3) such that e 1 = w Even if we simplify the problem into three dimensional case, it still seems to be impossible to identify all critical points explicitly. Our method to analyze the landscape of the loss surface is to find the properties of critical points and then show all saddle points and local maximizers have a direction of negative curvature. The following two lemmas captures the main geometrical properties of the critical points in three dimensional case. More detailed properties are given is Section 5.2 Lemma 4.5.arccos(−w 11) arccos(−w 21) = arccos(−w 12) arccos(−w 22) = − w 23 w 13.The ratio in Lemma 4.5 captures an important property of all critical points. For simplicity, based on D.5, we define DISPLAYFORM0.Then from the properties of θ 1, θ 2 and upper bound the value of k 0 we get Lemma 4.6. θ 1 = θ 2.That lemma shows that w 1 and w 2 must be on a plane whose projection onto span{w * 1, w * 2} is the bisector of w * 1 and w * 2. Combining this with the computation of Hessian, we conclude that we have found negative curvature for all possible critical points, which leads to the following proposition. Here we provide some detailed proofs which are important for the understanding of the main theorem. In general case, the following lemma shows we only need three dimension. Lemma 5.1. If (w 1, w 2) is a critical point, then there exists a set of standard orthogonal basis (e 1, e 2, e 3) such that e 1 = w * 1, e 2 = w * 2 and w 1, w 2 lies in span{e 1, e 2, e 3}.Proof. If (w 1, w 2) is a critical point, then DISPLAYFORM0 where matrix (I − w 1 w T 1) projects a vector onto the tangent space of w 1. Since DISPLAYFORM1 we get DISPLAYFORM2 which means that DISPLAYFORM3 )w * 2 lies in the direction of w 1. If θ w1,w2 = π, i.e., w 1 = −w 2, then of course the four vectors have rank at most 3, so we can find the proper basis. If θ w1,w2 < π, then we know that there exists a real number r such that DISPLAYFORM4 Since θ w1,w2 < π, we know that the four vectors w 1, w 2, w * 1 and w * 2 are linear dependent. Thus, they have rank at most 3 and we can find the proper basis. Next we will focus on the properties of critical points. Assume (w 1, w 2) is one of the critical points, from lemma D.1 we can find a set of standard orthogonal basis (e 1, e 2, e 3) such that e 1 = w * 1, e 2 = w * 2 and w 1, w 2 lies in span{e 1, e 2, e 3}. Furthermore, assume w 1 = w 11 e 1 + w 12 e 2 + w 13 e 3 and w 2 = w 21 e 1 + w 22 e 2 + w 23 e 3, i.e., w 1 = (w 11, w 12, w 13) and w 2 = (w 21, w 22, w 23). Since we have already found out all the critical points when w 13 = w 23 = 0, in the following we assume w Proof. Adapting from the proof of lemma D.4 and we know that DISPLAYFORM0 Similarly, we have DISPLAYFORM1 Taking the first component of FORMULA0 and FORMULA0 gives us DISPLAYFORM2 DISPLAYFORM3 Thus, DISPLAYFORM4 Similarly, we get DISPLAYFORM5 Since ∀i, j ∈, π − θ wi,w * j = arccos(−θ wij), we know that DISPLAYFORM6 Using this equation, we obtain several properties of critical points. The following two lemmas show the basic properties of critical points in three dimensional case. Completed proofs are given in Appendix B and C. Lemma 5.3. θ w1,w2 < π. Lemma 5.4. w 13 * w 23 < 0.These two lemmas restrict the position of critical points in some specific domains. Then we construct a new function F in order to get more precise analysis. 
Define DISPLAYFORM7 From the properties of that particular function and upper bound the value of k 0 we get Lemma 5.5. θ 1 = θ 2.That lemma shows that w 1 and w 2 must be on a plane whose projection onto span{w * 1, w * 2} is the bisector of w * 1 and w * 2. Although we cannot identify the critical points explicitly, we will show these geometric properties already capture the direction of negative curvature. In this section, we partially characterize the structure of the critical points when w * 1, w * 2 are nonorthogonal, but form an acute angle. In other words, the angle between w * 1 and w * 2 is α ∈ (0, π 2). Let us first consider the 2D cases, i.e., both w 1 and w 2 are in the span of w * 1 and w * 2. Similar to the original problem, after the technique of changing variables(i.e., using polar coordinates and assume θ 1 and θ 2 are the angles of w 1 and w 2 in polar coordinates), we divide the whole plane into 4 parts, which are the angle in [0, α], [α, π], [π, π + α] and [π + α, 2π). We have the following lemma: Lemma 6.1. Assume w * 1 = w * 2 = 1, w * T 1 w * 2 > 0 and w 1, w 2 ∈ span{w * 1, w * 2}. When w 1 and w 2 are in the same part(one of four parts), the only critical points except the global minima are those when both w 1 and w 2 are on the bisector of w * 1 and w * 2.Proof. The complete proof is given in appendix E, the techniques are nearly the same as things in the original problem and a bit harder, so to be brief, we omit the proof details here. For the three-dimensional cases cases of this new problem, it's interesting that the first few lemmatas are still true. Specifically, Lemma D.1(restated as Lemma 4.4) to Lemma D.5(restated as Lemma 4.5) are still correct. The proof is very similar to the proofs of those lemmas, except we need modification to the coefficients of terms in the expressions of the manifold gradients. We did experiments to verify the theoretical . Since our are restricted to the case of K = 2 hidden units, it is also natural to investigate whether general two-layer ReLU networks also have the property that all local minima are global minima. Unfortunately as we show via numerical simulation, this is not the case. We consider the cases of K from 2 to 11 hidden units and we set the dimension d = K. For each K, the true parameters are orthogonal to each other. For each K, we run projected gradient descent with 300 different random initializations, and count the number of local minimum (critical points where the manifold Hessian is positive definite) with non-zero training error. If we reach a sub-optimal local minimum, we can conclude the loss surface exhibits spurious local minima. The bar plot showing the number of times gradient descent converged to spurious local minima is in FIG2. From the plot, we see there is no spurious local minima from K = 2 to K = 6. However for K ≥ 7, we observe a clear trend that there are more spurious local minima when there are more hidden units. In this paper, we provided recovery guarantee of stochastic gradient descent with random initialization for learning a two-layer neural network with two hidden nodes, unit-norm weights, ReLU activation functions and Gaussian inputs. Experiments are also done to verify our . For future work, here we list some possible directions. This paper focused on a ReLU network with only two hidden units,. And the teaching weights must be orthogonal. Those are many conditions, in which we think there are some conditions that are not quite essential, e.g., the orthogonal assumption. 
In experiments we have already seen that even if they are not orthogonal, it still has some good properties such as the positions of critical points. Therefore, in the future we can further relax or abandon some of the assumptions of this paper and preserve or improve the we have. The neural network we discussed in this paper is in some sense very simple and far from practice, although it is already the most complex model when we want to analyze the whole loss surface. By experiments we have found that when it comes to seven hidden nodes with orthogonal true parameters, there will be some bad local minima, i.e., there are some local minima that are not global. We believe that research in this paper can capture the characteristics of the whole loss surface and can help analyze the loss surface when there are three or even more hidden units, which may give some bounds on the performance of bad local minima and help us understand the specific non-convexity of loss surfaces. Consider a neural network with 2 hidden nodes and ReLU as the activation function: DISPLAYFORM0 where σ(x) = max(0, x) is the ReLU function. First we study the 2-D case, i.e., the input and all parameters are two dimensional. Assume that the input follows standard normal distribution. The loss function is population loss: DISPLAYFORM1 Define DISPLAYFORM2 then from BID0 we get DISPLAYFORM3 Thus, DISPLAYFORM4 Moreover, from FORMULA1 we get DISPLAYFORM5 Assume w * 1 = w * 2 and w * T 1 w * 2 = 0. WLOG, let e 1 = w * 1 and e 2 = w * 2. Then we know that ∀i, j ∈, g(w * i, w * j) is a constant number. Thus, define the objective function(which equals to 4l(W) up to an additive constant) DISPLAYFORM6 Thus, DISPLAYFORM7 Similarly, for w 2, the gradient is ∂f DISPLAYFORM8 Assume that w 1 = (w 11, w 12) and w 2 = (w 21, w 22), then the gradient can be expressed in this form: DISPLAYFORM9 and DISPLAYFORM10 Because of symmetry, for w 2, the gradient is DISPLAYFORM11 and DISPLAYFORM12 B CRITICAL POINTS IN 2D CASES In 2D cases, we can translate W to polar coordinates and fix w 1 = w 2 = 1, so there are two variables left: θ 1 and θ 2, i.e., w 1 = (cos θ 1, sin θ 1) and w 2 = (cos θ 2, sin θ 2).For manifold gradient, we only need to consider its norm and check whether it's zero. Under review as a conference paper at ICLR 2018To make life easier, it's better to simplify the m functions a bit using w 1 = (cos θ 1, sin θ 1) and w 2 = (cos θ 2, sin θ 2): DISPLAYFORM0 Similarly, DISPLAYFORM1 Then we can divide them into several cases and analyze them one by one to specify the positions and properties of the critical points. DISPLAYFORM2 The norm of the manifold gradient w.r.t. w 1 is DISPLAYFORM3 Similarly, the norm of m(w 2) is DISPLAYFORM4 Define DISPLAYFORM5 If m(w 1) = m(w 2) = 0, then DISPLAYFORM6 and DISPLAYFORM7 Thus, DISPLAYFORM8 Also note that FORMULA3 and we get DISPLAYFORM9 DISPLAYFORM10 and the inequality becomes equality only then DISPLAYFORM11 ≥ 0. Note that is because cos(2θ) − cos θ is always non-positive when 0 ≤ θ ≤ DISPLAYFORM12 In a word, there are two critical points in this case: DISPLAYFORM13 The norm of the manifold gradient w.r.t. w 1 is DISPLAYFORM14 Similarly, DISPLAYFORM15 Define DISPLAYFORM16 DISPLAYFORM17 and the inequality becomes equality only then θ = π 2 or θ = π. DISPLAYFORM18 Note that the inequality becomes equality only when θ cos θ = 0 and DISPLAYFORM19 and DISPLAYFORM20 Thus, DISPLAYFORM21 However, we know that h 2 (θ 1) < 0 and h 2 (θ 1) < 0, which makes a contradiction. 
In a word, there is no critical point in this case. DISPLAYFORM22 The norm of the manifold gradient w.r.t. w 1 is DISPLAYFORM23 Similarly, the norm of m(w 2) is DISPLAYFORM24 Define DISPLAYFORM25 Let θ = θ + π, then DISPLAYFORM26 DISPLAYFORM27 Moreover, ∀θ ∈ [π, DISPLAYFORM28 DISPLAYFORM29 DISPLAYFORM30 = 0.Also, when θ ∈ [π, DISPLAYFORM31 so h 3 is an increasing function when θ ∈ [π, From Lemma B.1, DISPLAYFORM32 DISPLAYFORM33 DISPLAYFORM34 DISPLAYFORM35 Note that becomes equality only when θ 1 = π or θ 1 = DISPLAYFORM36 Actually, this is symmetric to the B.3, so in this part I would like to specify this kind of symmetry. We have already assumed that θ 1 ≤ θ 2 without loss of generality, and under this assumption, we can find another symmetry: From w 1 and w 2, using line y = x as symmetry axis, we can get two new vectors w 1 and w 2. w 1 is not necessarily the image of w 1 because we need to preserve the assumption that θ 1 ≤ θ 2, but there exists one and only one mapping such that θ 1 ≤ θ 2. In this kind of symmetry, the angles, including θ w1,w2 and θ wi,w * j where i, j ∈, are the same, so the two symmetric cases share the same gradients, thus the symmetric critical points. We use (i, j),where i, j ∈, to represent the case that θ 1 is in the ith quadrant and θ 2 is in the jth one. Using this kind of symmetry, we conclude that is equivalent to and is equivalent to, so there are 4 cases left which are,, and. DISPLAYFORM37 Similar to previous cases, DISPLAYFORM38 and m(w 2) = 1 π (π − θ 2 + θ 1) sin(θ 2 − θ 1) + cos θ 2 − sin θ 2 DISPLAYFORM39 Using previous definitions, we conclude that DISPLAYFORM40 DISPLAYFORM41 If m(w 1) = m(w 2) = 0, then m (w 1) + m(w 2) = 0, i.e., DISPLAYFORM42 From we know that DISPLAYFORM43 Thus, using lemma B.2, DISPLAYFORM44 That means the only case that h 1 (θ 1) + h 2 (θ 2) = 0 is when the inequality becomes equality, which means that cos θ 1 = 1 and h 2 (θ 1 + π 2) = h 2 (θ 2) = − 1 2. Thus, we must have θ 1 = 0, and θ 2 = π 2 or θ 2 = π. Plugging them back in FORMULA0 and FORMULA0, we can verify that the first one is a critical point while the other is not. Since (θ 1, θ 2) = (0, π 2) has been counted in case 1, there are no new critical points in this case. DISPLAYFORM45 Similar to previous cases, DISPLAYFORM46 and m(w 2) = 1 π (π − θ w1,w2) sin(θ 2 − θ 1) + cos θ 2 − sin θ 2 DISPLAYFORM47 Thus, using previous definitions DISPLAYFORM48 and DISPLAYFORM49 If m(w 1) = m(w 2) = 0, then m(w 1) + m(w 2) = 0, i.e., DISPLAYFORM50 DISPLAYFORM51 Then we have the following lemma: Lemma B.3. When 0 ≤ θ ≤ Proof. From, h 3 (θ + π) = h 1 (θ) − cos θ + sin θ. Thus, H(θ) = 2h 1 (θ) − cos θ + sin θ DISPLAYFORM52 When 0 ≤ θ ≤ π 4, since sin θ is a concave function for θ, we know that DISPLAYFORM53 Thus, DISPLAYFORM54 To make H(θ) = 0, we must have DISPLAYFORM55 Thus, DISPLAYFORM56 And to make H(θ) = 0, the only possibility is θ = π 2, which ends the proof. Remember that if m(w 1) = m(w 2) = 0, then we have h 3 (θ 2) = −h 1 (θ 1).If h 1 (θ 1) > 0, i.e., 0 ≤ θ 1 < π 4, then from lemma B.3, H(θ 1) ≤ 0, which means that DISPLAYFORM57 Since h 3 is a strictly increasing function, we know that if h 3 (θ 2) = −h 1 (θ 1), then θ 2 ≥ θ 1 + π, so sin(θ 1 − θ 2) ≥ 0, and that means DISPLAYFORM58 Similarly, if h 1 (θ 1) < 0, i.e., DISPLAYFORM59 Thus, if h 3 (θ 2) = −h 1 (θ 1), then θ 2 ≤ θ 1 + π, so sin(θ 1 − θ 2) ≤ 0, and that means DISPLAYFORM60 The last possibility is h 1 (θ 1) = 0, i.e., θ 1 = π 4. Plugging it into and we know that h 3 (θ 2) = 0, so θ 2 = 5π 4. 
And that is indeed a critical point. In a word, the only critical point in this case is (θ 1, θ 2) = (DISPLAYFORM61 Like previous cases, DISPLAYFORM62 If m(w 1) = m(w 2) = 0, then m (w 1) + m(w 2) = 0, i.e., DISPLAYFORM63 Let θ = θ 2 − π, then from and FORMULA0, we know that DISPLAYFORM64 Thus, from lemma B.2, DISPLAYFORM65 Therefore, in order to achieve h 2 (θ 1) + h 3 (θ 2) = 0, the only way is let becomes equality, which means that θ 2 = 3π 2 and θ 1 = π 2 or π. Plugging them into FORMULA0 and FORMULA0 we conclude that both of them are not critical points. In a word, there is no critical point in this case. DISPLAYFORM66 Similar to previous cases, DISPLAYFORM67 and m(w 2) = 1 π (π − θ w1,w2) sin(θ 2 − θ 1) + cos θ 2 − sin θ 2 DISPLAYFORM68 2, we must have θ w1,w2 = π 2, so it must be true that DISPLAYFORM69 Therefore, using lemma B.2, DISPLAYFORM70 In a word, there is no critical point in this case. In , based on the assumption that θ 1 ≤ θ 2 there are four critical points in the 2D case: Assume the manifold is R = {(w 1, w 2): w 1 2 = w 2 2 = 1}, then the Hessian on the manifold is DISPLAYFORM0 DISPLAYFORM1 where z = (z 1, z 2) satisfies w DISPLAYFORM2 and DISPLAYFORM3 Then we can get when w 1 = w 2 and w 1 = −w 2, DISPLAYFORM4 So this point is a saddle point. In , we have four critical points: one is global maximal, the other three are saddle points. DISPLAYFORM0 ) is a critical point, then there exists a set of standard orthogonal basis (e 1, e 2, e 3) such that e 1 = w * 1, e 2 = w * 2 and w 1, w 2 lies in span{e 1, e 2, e 3}.Proof. If (w 1, w 2) is a critical point, then DISPLAYFORM1 where matrix (I − w 1 w T 1) projects a vector onto the tangent space of w 1. Since DISPLAYFORM2 we get DISPLAYFORM3 DISPLAYFORM4 )w * 2 lies in the direction of w 1. If θ w1,w2 = π, i.e., w 1 = −w 2, then of course the four vectors have rank at most 3, so we can find the proper basis. If θ w1,w2 < π, then we know that there exists a real number r such that DISPLAYFORM5 Since θ w1,w2 < π, we know that the four vectors w 1, w 2, w * 1 and w * 2 are linear dependent. Thus, they have rank at most 3 and we can find the proper basis. Next we will focus on the properties of critical points. Assume (w 1, w 2) is one of the critical points, from lemma D.1 we can find a set of standard orthogonal basis (e 1, e 2, e 3) such that e 1 = w * 1, e 2 = w * 2 and w 1, w 2 lies in span{e 1, e 2, e 3}. Furthermore, assume w 1 = w 11 e 1 + w 12 e 2 + w 13 e 3 and w 2 = w 21 e 1 + w 22 e 2 + w 23 e 3, i.e., w 1 = (w 11, w 12, w 13) and w 2 = (w 21, w 22, w 23). Since we have already found out all the critical points when w 13 = w 23 = 0, in the following we assume w Proof. If θ w1,w2 = π, then w 1 = −w 2, so w 2 is in the direction of w 1. We have already known from DISPLAYFORM0 )w * 2 lies in span{e 1, e 2}, so w 1 ∈ span{e 1, e 2} and w 2 ∈ span{e 1, e 2}. Thus, w 13 = w 23 = 0 and that contradicts with the assumption. In a word, θ w1,w2 < π. Lemma D.3. w 13 * w 23 = 0.Proof. We have already known from DISPLAYFORM1 lies in the direction of w 1. Writing it in each dimension and we know that there exists a real number r 0 such that DISPLAYFORM2 From lemma D.2 we know that θ w1,w2 < π, so we can define DISPLAYFORM3.Then the equations become DISPLAYFORM4 Similarly, we have DISPLAYFORM5 Since w 2 13 + w 2 23 = 0, at least one of those two variables cannot be 0. WLOG, we assume that w 13 = 0. If w 23 = 0, then from we know that w 13 = 0, which contradicts the assumption. 
Thus, w 23 = 0, which means that w 13 * w 23 = 0.Lemma D.4. w 13 * w 23 < 0.Proof. Adapting from the proof of lemma D.3, we know that DISPLAYFORM6 DISPLAYFORM7 DISPLAYFORM8 and DISPLAYFORM9 DISPLAYFORM10 DISPLAYFORM11 Furthermore, kk = From lemma D.2 we know that θ w1,w2 < π, and from lemma D.3 we know that both w 1 and w 2 are DISPLAYFORM12 > 0. Therefore, we have DISPLAYFORM13 That means k < 0, so Proof. Adapting from the proof of lemma D.4 and we know that DISPLAYFORM14 Similarly, we have DISPLAYFORM15 Taking the first component of FORMULA0 and FORMULA0 gives us DISPLAYFORM16 DISPLAYFORM17 Thus, DISPLAYFORM18 Similarly, we get DISPLAYFORM19 Since ∀i, j ∈, π − θ wi,w * j = arccos(−θ wij), we know that DISPLAYFORM20 For simplicity, based on D.5, we define k 0 = −k, θ 1 = π − θ w2,w * 1 and θ 2 = π − θ w2,w *. Then DISPLAYFORM0 WLOG, assume k 0 ≥ 1, otherwise we can switch w 1 and w 2.Thus, DISPLAYFORM1 DISPLAYFORM2 Similarly, if we apply the change of variables onto the second component of, we will get DISPLAYFORM3 Thus, DISPLAYFORM4 Proof. Note that when θ ∈ [0, DISPLAYFORM5 is a strict decreasing function w.r.t. θ. Note that G = k 0 + 1 > 0 and DISPLAYFORM6 Then the only thing we need to prove is DISPLAYFORM7 Since the inequality holds only when cos 2, which means k 0 = 3 and k 0 = 1, which makes a contradiction. Thus, DISPLAYFORM8 Therefore, DISPLAYFORM9, which completes the proof. Lemma D.10. F (θ) is either strictly decreasing or first decrease and then increase when θ ∈ (θ 0, DISPLAYFORM10 Proof. DISPLAYFORM11 Define DISPLAYFORM12, then H(θ)·F (θ) < 0(i.e., when H(θ) is positive, F (θ) is decreasing, otherwise F (θ) is increasing), and we know that DISPLAYFORM13 Note that holds because θ > θ 0 ≥ π 2k0. Thus, H(θ) is a strictly decreasing function when θ ∈ (θ 0, π k0]. We can see that DISPLAYFORM14 Thus, if H(DISPLAYFORM15 Otherwise, F (θ) first decrease and then increase when θ ∈ (θ 0, DISPLAYFORM16 Proof. From lemma D.10 we have already known that F (θ) is either strictly decreasing or first decrease and then increase when θ ∈ (θ 0, π k0], so the maximum of the function value on an interval can only be at the endpoints of that interval, which means that we only need to prove F (DISPLAYFORM17 Thus, h(x) is decreasing in [, π]. However, we know that h(DISPLAYFORM18 which means that F ( DISPLAYFORM19 Lemma D.12. θ 1 = θ 2 .Proof. From the proof of lemma D.8 we get DISPLAYFORM20 Thus, DISPLAYFORM21 Using lemma D.9, If z = (tz 1, z 2), ||z 1 || = ||z 2 || = 1 and w E 2D CASES WITH ASSUMPTION RELAXATION Since this section is pretty similar to B, I will try my best to make it brief and point out the most important things in the proof. DISPLAYFORM22 After the changing of variables(i.e., polar coordinates), we know that w 1 = (cos θ 1, sin θ 1) and w 2 = (cos θ 2, sin θ 2). Then when θ is in the first part to the fourth part, the function h will change to four different functions: DISPLAYFORM0 WLOG, we assume θ 1 ≤ θ 2. E.2 0 ≤ θ 1 ≤ θ 2 ≤ α First, it's easy to verify that ∀θ ∈ [0, θ], h 1 (θ) + h 1 (α − θ) = 0.Besides, h 1 (θ) = sin θ + sin(α − θ) − (π − θ) cos θ − (π − α + θ) cos(α − θ)= 2 sin α 2 cos(θ − α 2) − (π − θ) cos θ − (π − α + θ) cos(α − θ)≤ 2 sin α 2 − π 2 (cos θ + cos(α − θ)) DISPLAYFORM1 ≤ 2 sin α 2 − π cos α 2 < 0.When m(w 1) = m(w 2) = 0, we know that h 1 (θ 1)+h 1 (θ 2) = 0, and because of those two properties above, we know that θ 1 + θ 2 = α. Thus, θ 1 ∈ [0, α 2]. And we have the following lemma Lemma E.1. m(w 1) ≤ 0. 
m(w 1) = sin(α − 2θ 1)(π − α + 2θ 1) − (π − α + θ 1) sin(α − θ 1) + (π − θ 1) sin θ 1 ≥ sin(α − 2θ 1)(π − α + θ 1) − (π − α + θ 1) sin(α − θ 1) + (π − θ 1) sin θ 1≥ sin(α − 2θ 1)(π − α + θ 1) − (π − α + θ 1) sin(α − θ 1) + (π − α 2) sin θ 1= (π − α + θ 1)(sin(α − 2θ 1) − sin(α − θ 1)) + (π − α 2) sin θ 1≥ (π − α 2)(sin(α − 2θ 1) − sin(α − θ 1) + sin θ 1 )= (π − α 2)(sin(α − 2θ 1) − sin θ 1 − sin θ 1 cos(α − 2θ 1) − cos θ 1 sin(α − 2θ 1)) DISPLAYFORM0 Thus, the only possible critical points are m(w 1) = 0, which are 0 and α 2. After verification, we conclude that there are only two critical points in this case: (θ 1, θ 2) = (0, α) or (θ 1, θ 2) = (α 2, α 2).E.3 α ≤ θ 1 ≤ θ 2 ≤ π When m(w 1) = m(w 2) = 0, we know that h 1 (θ 1) + h 1 (θ 2) = 0. However, when θ ∈ [α, π], we know that h 2 (θ) = (π − θ + α) sin(α − θ) − (π − θ) sin θ ≤ 0.The inequality cannot become equal because the possible values of θs such that each term equals zero has no intersection. Thus, h 2 (θ) is always negative, which means that in this case there are no critical points. E.4 π ≤ θ 1 ≤ θ 2 ≤ π + α It's easy to verify that ∀θ ∈ [π, π + α], h 3 (θ) + h 3 (2π + α − θ) = 0. Furthermore, DISPLAYFORM1 > 0.Thus, from m(w 1) = m(w 2) = 0, we know that h 1 (θ 1) + h 1 (θ 2) = 0 we get θ 1 + θ 2 = 2π + α, which means that θ 1 ∈ [π, π + α 2], so we can prove the following lemma: Lemma E.2. m(w 1) ≤ 0.Proof. Let θ = θ 1 − π, then m(w 1) = (π − θ 2 + θ 1) sin(θ 1 − θ 2) + h 3 (θ 1) = (π + θ − α + θ) sin(2θ − α) + h 1 (θ) + π sin θ − π sin(α − θ) ≤ (π + 2θ − α) sin(2θ − α) + sin(α − 2θ)(π + 2θ − α) + π(sin θ − sin(α − θ)) ≤ π(sin θ − cos θ) ≤ 0.The first inequality is from lemma E.1.Thus, the only possible critical points are m(w 1) = 0, which are π and π + α 2. After verification, we conclude that there are only two critical points in this case: (θ 1, θ 2) = (π, π + α) or (θ 1, θ 2) = (π + α 2, π + α 2).
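To complement the case analysis above, the stationary points on the circle can also be located numerically. The sketch below assumes the standard closed-form Gaussian kernel for ReLU features, E[σ(w·x)σ(v·x)] = (1/2π)(‖w‖‖v‖ sin θ + (π − θ) w·v), which is how the garbled kernel g above is usually written in the literature; the objective and the restart-based search are illustrative only, so the recovered limit points should be read as candidates to compare against the critical points derived in this appendix, not as a substitute for the proofs.

```python
import numpy as np

def g(w, v):
    """Assumed closed-form E[relu(w.x) * relu(v.x)] for x ~ N(0, I)."""
    nw, nv = np.linalg.norm(w), np.linalg.norm(v)
    cos = np.clip(w @ v / (nw * nv), -1.0, 1.0)
    theta = np.arccos(cos)
    return (nw * nv * np.sin(theta) + (np.pi - theta) * (w @ v)) / (2 * np.pi)

def f(thetas, w_star):
    """Population objective (up to an additive constant) on the unit circle."""
    W = [np.array([np.cos(t), np.sin(t)]) for t in thetas]
    val = sum(g(wi, wj) for wi in W for wj in W)
    val -= 2.0 * sum(g(wi, ws) for wi in W for ws in w_star)
    return val

def num_grad(thetas, w_star, eps=1e-5):
    grad = np.zeros_like(thetas)
    for i in range(len(thetas)):
        step = np.zeros_like(thetas)
        step[i] = eps
        grad[i] = (f(thetas + step, w_star) - f(thetas - step, w_star)) / (2 * eps)
    return grad

w_star = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]   # orthogonal targets e1, e2
rng = np.random.default_rng(0)
limit_points = []
for _ in range(30):                                      # random restarts
    th = rng.uniform(0.0, 2.0 * np.pi, size=2)
    for _ in range(2000):                                # gradient descent on the angles
        th = th - 0.2 * num_grad(th, w_star)
    th = np.sort(np.mod(th, 2.0 * np.pi))                # canonical form with theta1 <= theta2
    if not any(np.allclose(th, p, atol=1e-2) for p in limit_points):
        limit_points.append(th)
print(np.round(np.array(limit_points), 3))               # compare with the appendix's critical points
```

Because plain gradient descent only converges to (approximate) minimizers, a scan of this kind checks for spurious local minima rather than enumerating saddle points or maxima; it is also how one would empirically probe the bad local minima reported above for larger numbers of hidden nodes.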
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
B14uJzW0b
Recovery guarantee of stochastic gradient descent with random initialization for learning a two-layer neural network with two hidden nodes, unit-norm weights, ReLU activation functions and Gaussian inputs.
Dropout is a simple yet effective technique to improve generalization performance and prevent overfitting in deep neural networks (DNNs). In this paper, we discuss three novel observations about dropout to better understand the generalization of DNNs with rectified linear unit (ReLU) activations: 1) dropout is a smoothing technique that encourages each local linear model of a DNN to be trained on data points from nearby regions; 2) a constant dropout rate can in effective neural-deactivation rates that are significantly different for layers with different fractions of activated neurons; and 3) the rescaling factor of dropout causes an inconsistency to occur between the normalization during training and testing conditions when batch normalization is also used. The above leads to three simple but nontrivial improvements to dropout ing in our proposed method "Jumpout." Jumpout samples the dropout rate using a monotone decreasing distribution (such as the right part of a truncated Gaussian), so the local linear model at each data point is trained, with high probability, to work better for data points from nearby than from more distant regions. Instead of tuning a dropout rate for each layer and applying it to all samples, jumpout moreover adaptively normalizes the dropout rate at each layer and every training sample/batch, so the effective dropout rate applied to the activated neurons are kept the same. Moreover, we rescale the outputs of jumpout for a better trade-off that keeps both the variance and mean of neurons more consistent between training and test phases, which mitigates the incompatibility between dropout and batch normalization. Compared to the original dropout, jumpout shows significantly improved performance on CIFAR10, CIFAR100, Fashion- MNIST, STL10, SVHN, ImageNet-1k, etc., while introducing negligible additional memory and computation costs. Deep learning has achieved remarkable success on a variety of machine learning tasks BID15 BID14. Deep neural networks (DNN), however, are often able to fit the training data perfectly -this can in the overfitting problem, thereby weakening the generalization performance on unseen data. Dropout BID17 BID7 is a simple yet effective technique to mitigate such problems by randomly setting the activations of hidden neurons to 0, a strategy that reduces co-adaptation amongst neurons. Dropout applies to any layer in a DNN without causing significant additional computational overhead. Dropout, however, has several drawbacks. Firstly, dropout rates, constituting extra hyper-parameters at each layer, need to be tuned to get optimal performance. Too high a dropout rate can slow the convergence rate of the model, and often hurt final performance. Too low a rate yields few or no improvements on generalization performance. Ideally, dropout rates should be tuned separately for each layer and also during various training stages. In practice, to reduce computation, we often tune a single dropout rate and keep it constant for all dropout layers and throughout the training process. If we treat dropout as a type of perturbation on each training sample, it acts to generalize the DNN to noisy samples having that specific expected amount of perturbation (due to the fixed dropout rate) with high probability. The fixed rate rules out samples typical having less perturbation, i.e., those potentially more likely to be closer to the original samples and thus that are potentially more helpful to improve generalization. 
Also, when a constant dropout rate is applied to layers and samples having different fractions of activated neurons, the effective dropout rate (i.e., the proportion of the activated neurons that are deactivated by dropout) varies, which might in too much perturbation for some layers and samples and too little perturbation for others. Another deficiency of dropout lies in its incompatibility with batch normalization (BN) BID8 (more empirical evidence of this is shown in Section 3.3). As dropout randomly shuts down activated neurons, it needs to rescale the undropped neurons to match the original overall activation gain of the layer. Unfortunately, such rescaling breaks the consistency of the normalization parameters required between training and test phases 1 and may cause poor behavior when used with BN. Since BN, and its variants BID0 BID18 BID20, has become an almost indispensable component of modern DNN architectures to keep the training stable and to accelerate convergence, dropout itself often gets dropped out in the choice between these two non-complementary options and has recently become less popular. We propose three simple modifications to dropout in order to overcome the drawbacks mentioned above. These modifications lead to an improved version of dropout we call "jumpout." Our approach is motivated by three observations about how dropout in improved generalization performance for DNNs with rectified linear unit (ReLU) activations, which covers a frequently used class of DNNs. Firstly, we note that any DNN with ReLU is a piecewise linear function which applies different linear models to data points from different polyhedra defined by the ReLU activation patterns. Based on this observation, applying dropout to a training sample randomly changes its ReLU activation patterns and hence the underlying polyhedral structure and corresponding linear models. This means that each linear model is trained not only to produce correct predictions for data points in its associated polyhedron, but also is trained to work for data points in nearby polyhedra; what precisely "nearby" means depends on the dropout rate used. This partially explains why dropout improves generalization performance. The problem, however, is that with a fixed dropout rate, say p, and on a layer with n units, the typical number of units dropped out is np as that is the mode of a Binomial distribution with parameter p. It is relatively rare that either very few (closer to zero) or very many (closer to n) units are dropped out. Thus, with high probability, each linear model is smoothed to work on data points in polyhedra at a typical distance away. The probability of smoothing over closer distances is potentially much smaller, thus not achieving the goal of local smoothness. In jumpout, by contrast, p rather than being fixed is itself a random variable; we sample p from a distribution that is monotone decreasing (e.g., a truncated half-Gaussian). This achieves the property that Pr(i units dropping out) ≥ Pr(i+1 units dropping out) for all i ∈ {1, 2, . . ., n}. That is, a smaller dropout rate has a higher probability of being chosen. Hence, the probability of smoothing polyhedra to other points decreases as the points move farther away. Secondly, we notice that in dropout, the fraction of activated neurons in different layers, for different samples and different training stages, can be different. 
Although we are using the same dropout rate, since dropping out neurons that are already quiescent by ReLU changes nothing, the effective dropout rate, i.e., the fraction of the activated neurons that are dropped, can vary significantly. In jumpout, we adaptively normalize the dropout rate for each layer and each training sample/batch, so the effective neural-deactivation rate applied to the activated neurons are consistent over different layers and different samples as training proceeds. Lastly, we address the incompatibility problem between dropout and BN by rescaling the outputs of jumpout in order to keep the variance unchanged after the process of neural deactivation. Therefore, the BN layers learned in the training phase can be directly applied in the test phase without an inconsistency, and we can reap the benefits of both dropout and BN when training a DNN.In our implementation, similar to dropout, jumpout also randomly generates a 0/1 mask over the hidden neurons to drop activations. It does not require any extra training, can be easily implemented and incorporated into existing architectures with only a minor modification to dropout code. In our experiments on a broad range of benchmark datasets including CIFAR10, CIFAR100, Fashion-MNIST, SVHN, STL10 and ImageNet-1k, jumpout shows almost the same memory and computation costs as the original dropout, but significantly and consistently outperforms dropout on a variety of tasks, as we show below. Jumpout is not the first approach to address the fixed dropout rate problem. Indeed, recent work has proposed different methods to generate adaptive dropout rates. BID1 proposed "standout" to adaptively change the dropout rates for various layers and training stages. They utilized a binary belief network and trained it together with the original network to control the dropout rates. BID24 further extend the model so that adaptive dropout rates can be learned for different neurons or group of neurons. BID23 showed that the Rademacher complexity of a DNN is bounded by a function related to the dropout rate vectors, and they proposed to adaptively change dropout rates according to the Rademacher complexity of the network. In contrast to the above methods, jumpout does not rely on additional trained models: it adjusts the dropout rate solely based on the ReLU activation patterns. Moreover, jumpout introduces negligible computation and memory overhead relative to the original dropout methods, and can be easily incorporated into existing model architectures. BID19 showed that dropout has a Gaussian approximation called Gaussian dropout and proposed to optimize the Gaussian dropout directly to achieve faster convergence. The Gaussian dropout was also extended and studied from the perspective of variational methods. BID9 generalized Gaussian dropout and proposed variational dropout, where they connected the global uncertainty with the dropout rates so that dropout rates can be adaptive for every neuron. BID11 further extended variational dropout to reduce the variance of the gradient estimator and achieved sparse dropout rates. Other recent variants of dropout include Swapout BID16, which combines dropout with random skipping connections to generalize to different neural network architectures, and Fraternal Dropout BID25, which trains two identical DNNs using different dropout masks to produce the same outputs and tries to shrink the gap between the training and test phases of dropout. 
In this paper, we focus on changes to the original dropout that do not require any extra training/optimization costs or introduce more parameters to learn. Jumpout involves orthogonal and synergistic contributions to most of the above methods, and targets different problems of dropout. Indeed, jumpout can be applied along with most other previous variants of dropout. We study a feed-forward deep neural networks of the form: DISPLAYFORM0 where W j is the weight matrix for layer j, ψ j is the corresponding activation function (ReLU in this paper), x ∈ X is an input data point of d in dimensions andŷ(x) is the network's output prediction of d out dimensions, e.g., the logits before applying softmax. We denote the hidden nodes on layer j to be h j, i.e., h j = W j ψ j−1 (W j−1 ψ j−2 (. . . ψ 1 (W 1 x))), whose dimensionality is d j; they represent the nodes after applying the activation function ash j = ψ(h j).The above DNN formalization can generalize many DNN architectures used in practice. Clearly, Eqn. FORMULA0 can represent a fully-connected network of m layers. Note Eqn. covers the DNNs with bias terms at each layer since the bias terms can be written in the matrix multiplication as well by introducing dummy dimensions on the input data (append m 1's to input data). Moreover, the convolution operator is essentially a matrix multiplication, where every row of the matrix corresponds to applying a convolutional filter on a certain part of the input, and therefore the ing weight matrix is very sparse and has tied parameters, and typically has an enormous (compared to input size) number of rows. The average-pooling is a linear operator and therefore representable as a matrix multiplication, and max-pooling can be treated as an activation function. Finally, we can represent the residual network block by appending an identity matrix at the bottom of a weight matrix so that we can retain the input values, and add the retained input values later through another matrix operation. Therefore, we can also write a DNN with short-cut connections in the form of Eqn..For piecewise linear activation functions such as ReLU, the DNN in Eqn. FORMULA0 can be written as a piecewise linear function, i.e., the DNN in a region surrounding a given data point x is a linear model having the following form:ŷ DISPLAYFORM1 where W x j is the equivalent weight matrix after combining the ant activation pattern with W j. For instance, suppose we use ReLU activation ψ ReLU (z) = max(0, z); at every layer, we have an activation pattern for the input a j (x) ∈ {0, 1} dj, and a j (x)[p] = 0 indicates that ReLU sets the unit p to 0 or otherwise preserves the unit value. Then, ψ ReLU (W 1 x) = W x 1 x, where W x 1 is modified from W 1 by setting the rows, whose corresponding activation patterns are 0, to be all zero vectors. We can continue such a process to the deeper layers, and in the end we can eliminate all the ReLU functions and produce a linear model as shown in Eqn..In addition, the gradient DISPLAYFORM2 ∂x is the weight vector of the linear model. Note that the linear model in Eqn. 2 is specifically associated with the activation patterns {a j (x)} m j=1 on all layers for a data input x, which is equal to a set of linear constraints that defines a convex polyhedron containing Although the above analysis can be easily extended to DNNs with general piecewise linear activation functions, we focus on DNNs with ReLU activations in the rest of the paper for clarity. 
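To make the piecewise-linear view concrete, the following sketch (PyTorch, with hypothetical layer sizes and a bias-free two-layer network) extracts the local linear model of Eqn. 2 by zeroing the rows of W_1 whose ReLU units are inactive at a given input; everything in the snippet besides that construction is illustrative.

```python
import torch

torch.manual_seed(0)
d_in, d_h, d_out = 6, 8, 3
W1 = torch.randn(d_h, d_in)
W2 = torch.randn(d_out, d_h)

def net(x):
    return W2 @ torch.relu(W1 @ x)

x = torch.randn(d_in)
a1 = (W1 @ x > 0).float()            # activation pattern a_1(x) in {0, 1}^{d_h}
W1_x = a1.unsqueeze(1) * W1          # W_1 with inactive rows zeroed
W_local = W2 @ W1_x                  # local linear model: y_hat = W_local @ x

print(torch.allclose(net(x), W_local @ x))            # True: exact on this polyhedron
x_near = x + 1e-3 * torch.randn(d_in)                 # usually stays in the same polyhedron
print(torch.allclose(net(x_near), W_local @ x_near))  # typically True for small perturbations
```

The second check can fail when the perturbation crosses a polyhedron boundary, which is exactly the lack of smoothness between neighboring linear models that dropout is argued to mitigate.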
In addition to the piecewise linear property, ReLU units are cheap to compute, as is their gradient, and are widely applicable to many different tasks while achieving good performance BID5 BID22. In the following, we will study how dropout improves the generalization performance of a complicated DNN by considering how it generalizes each local linear model to its nearby convex polyhedra. This is easier to analyze and acts as the inspiration for our modifications to the original dropout. We will further elaborate the understandings of dropout based on the above insights of local linear models in the next section.3 THREE MODIFICATIONS TO DROPOUT LEAD TO JUMPOUT There have been multiple explanations for how dropout improves the performance of DNNs. Firstly, dropout prevents the co-adaptation of the neurons in the network, or in other words, encourages the independence or diversity amongst the neurons. Secondly, by randomly dropping a portion of neurons during training, we effectively train a large number of smaller networks, and during test/inference, the network prediction can be treated as an ensemble of the outputs from those smaller networks, and thus enjoys the advantages of using an ensemble such as variance reduction. Here we provide another perspective for understanding how dropout improves generalization performance by inspecting how it smooths each local linear model described in the previous section. As mentioned above, for a DNN with ReLUs, the input space is divided into convex polyhedra, and for any data point in every convex polyhedron of the final layer (a polyhedron that is not divided further into smaller regions), the DNN behaves exactly as a linear model. For large DNNs with thousands of neurons per layer, the number of convex polyhedra can be exponential in the number of neurons. Hence, there is a high chance that the training samples will be dispersedly situated amongst the different polyhedra, and every training data point is likely to be given its own distinct local linear model. Moreover, it is possible that two nearby polyhedra may correspond to arbitrarily different linear models, since they are the of consecutively multiplying a series of weight matrices FORMULA1 ), where each weight matrix W x j is W j with some rows setting to be all-zero according to the activation pattern a j (x) of a specific data point x. If the activation patterns of two polyhedra differ on some critical rows of the weight matrices, the ing linear models may differ a lot. Therefore, it is possible that the linear model of one polyhedron can only work for one or a few training data points strictly within the polyhedron, and may fail when applied to any nearby test data point (i.e., a lack of smoothness). This might make DNN fragile and perform unstably on new data, and thus weaken its generalization ability. DISPLAYFORM0 Given the problems of dropout mentioned in Section 1.1, we propose to sample a dropout rate from a truncated half-normal distribution (to get a positive value), which is the positive part of an ordinary Gaussian distribution with mean zero. In particular, we firstly sample p ∼ N (0, σ) from a Gaussian distribution, and then take the absolute value |p| as the dropout rate. We further truncate |p| so that |p| ∈ [p min, p max], where 0 ≤ p min < p max ≤ 1. These determine the lower and upper limits of the dropout rate and are used to ensure that the sampled probability does not get either too small, which makes jumpout ineffective, or too large, which may yield poor performance. 
Overall, this achieves a monotone decreasing probability of a given dropout rate as mentioned above. Other distributions (such as a Beta distribution) could also be used for this purpose, but we leave that to future work. We utilize the standard deviation σ as the hyper-parameter to control the amount of generalization enforcement. By using the above method, smaller dropout rates are sampled with higher probabilities so that a training sample will be more likely to contribute to the linear models of closer polyhedra. Therefore, such a Gaussian-based dropout rate distribution encourages the smoothness of the generalization performance of each local linear model, i.e., it will still perform well on points in closer polyhedra, but its effectiveness will diminish for a point farther away from the polyhedron it belongs to. The dropout rate for each layer is a hyper-parameter, and as stated above, it controls a form of smoothness amongst nearby local linear models. Ideally, the dropout rates of different layers should be tuned separately to improve network performance. In practice, it is computationally expensive or infeasible to tune so many hyper-parameters. One widely adopted approach is therefore to set the same drop rate for all layers and to tune one global dropout rate. Using a single global dropout rate is suboptimal because the proportion of active neurons (i.e., neurons with positive values) of each layer at each training stage and for each sample can be dramatically different (see FIG1). When applying the same dropout rate to different layers, different fractions of active neurons get deactivated, so the effective dropout rate applied to the active neurons varies significantly. Suppose the fraction of active neurons in layer j is q DISPLAYFORM0 Since dropping the inactive neurons has no effects (neurons with values ≤ 0 have already been set to 0 by ReLU), the effective dropout rate of every layer is p j q + j, where p j is the dropout rate of layer j. Thus, to better control the behavior of dropout for different layers and across various training stages, we normalize the dropout rate by q + j and use an actual dropout rate of p j = p j /q + j. By doing so, the hamming distance between the changed activation pattern and the original pattern is more consistent, and we can more precisely achieve the desirable level of smoothing encouragement by tuning the dropout rate as a single hyper-parameter. In standard dropout, if the dropout rate is p, we scale the neurons by 1/p during training and keeps the neuron values unchanged during the test/inference phase. The scaling factor 1/p keeps the mean of the neurons the same between training and test; this constitutes a primary reason for the incompatibility between dropout and batch normalization (BN) BID8. Specifically, though the mean of neurons is consistent, the variance can be dramatically different between the training and test phases, in which case the DNN might have unpredictable behavior as the BN layers cannot adapt to the change of variance from training to test condition. We consider one possible setting of combining dropout layers with BN layers where one linear computational layer (e.g., a fully-connected or a convolutional layer without activation function) is followed by a BN layer, then a ReLU activation layer, and then followed by a dropout layer. 
For layer j, without loss of generality, we may treat the value of a neuron i after ReLU, i.e.,h j [i] as a random variable with q + j probability of being 1 and 1 − q + j probability of being 0. If dropout is not applied,h j [i] then gets multiplied by certain entry in the weight matrix W j+1 [i, i], and contributes to the value of the i neuron of layer j +1. Since we consider any index i and i, we rename the following terms for simplicity: x j:=h j, w j:= W j+1 [i, i], y j:= h j+1 [i]. As neuron i of layer j + 1 (before ReLU) then gets fed into a BN layer, we will focus on the change of mean and variance as we add the dropout layer. Suppose we apply a dropout rate of p j, then DISPLAYFORM0 Hence, dropout changes both the scales of the mean and variance of neurons during training. Since the following BN's parameters are trained based on the scaled mean and variance, which however are not scaled by dropout during test/inference (because dropout is not used during testing), the trained BN is not consistent with the test phase. An easy fix of the inconsistency is to rescale the output y j to counteract dropout's on the scales of mean and variance. In order to recover the original scale of the mean, we should rescale the dropped neurons by (1 − p j) −1. However, the rescaling factor should be (1 − p j) −0.5 instead for recovering the scale of the variance if E(y j) is small and thus the second term of the variance can be ignored. Ideally, we can also take into account the value of E[w j], and scale the un-dropped nuerons by DISPLAYFORM1 However, computing information about w j, which is the weight matrix of the following layer, requires additional computation and memory cost. In addition, such a scaling factor is only correct for the variance of y j. To make the mean consistent, we should instead use (1 − p j) −1 (the original dropout scaling factor). No simple scaling method can resolve the shift in both mean and variance, as the mean rescaling (1 − p j) −1 does not solve the variance shift. −1, (1 − p) −0.5 and (1 − p) −0.75 as the dropout rescaling factor applied to y, when p = 0.2. The network is "CIFAR10(s)" (see Sec. 4). The left plot shows the empirical mean of y with dropout divided by the case without dropout (averaged over all layers), and the second plot shows the similar ratio for the variance. Ideally, both ratios should be close to 1. As shown in the plots, (1 − p) −0.75 gives nice trade-offs between the mean and variance rescaling. When the mean E(y j) is large in magnitude, so that the second term in the variance is comparable with the first term, in which case the variance is small, we should use the rescaling factor close to (1−p j) −1, which makes the mean exactly unchanged for training and test. In contrast, when the mean E(y j) is small in magnitude and close to 0, the second term in the variance is ignorable, and we should use (1 − p j) −0.5 as the rescaling factor, to make the variance unchanged. In practice, it is not efficient to compute E(y j) during training, so we propose to use a trade-off point (1 − p j) −0.75 between (1−p j) −1 and (1−p j) −0.5. In FIG4, we show that (1−p j) −0.75 makes both the mean and variance sufficiently consistent for the cases of using dropout and not using dropout. 
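A short Monte-Carlo check of the trade-off discussed above is given below; the Bernoulli model for the post-ReLU activation and the specific values of q+, p, and w are assumptions made only to reproduce the qualitative behaviour of the three rescaling exponents.

```python
# We model a post-ReLU activation as x ~ Bernoulli(q+), multiply by a fixed weight w,
# and compare y = w * x without dropout against y = s * w * x * m with a dropout mask
# m ~ Bernoulli(1 - p) and rescaling factor s = (1 - p) ** exponent.  Exponent -1
# preserves the mean exactly, -0.5 roughly preserves the variance when E[y] is small,
# and -0.75 is the compromise adopted here.
import numpy as np

rng = np.random.default_rng(0)
q_plus, p, w, n = 0.3, 0.2, 1.0, 2_000_000

x = rng.binomial(1, q_plus, size=n).astype(float)
y_ref = w * x                                        # no dropout (test-time behaviour)
mask = rng.binomial(1, 1 - p, size=n).astype(float)  # dropout keeps a unit w.p. 1 - p

for expo in (-1.0, -0.5, -0.75):
    s = (1 - p) ** expo
    y = s * w * x * mask
    print(f"exponent {expo:+.2f}: mean ratio {y.mean() / y_ref.mean():.3f}, "
          f"var ratio {y.var() / y_ref.var():.3f}")
```

Ideally both ratios would be 1; the printout shows that no single exponent achieves this, and that -0.75 sits between the mean-exact and variance-oriented choices.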
In FIG3, we compare the performance of the original dropout and dropout using our rescaling factor (1−p j) −0.75, when they DISPLAYFORM2 // Compute the fraction of activated neurons DISPLAYFORM3 // Sample a Gaussian dropout rate DISPLAYFORM4 // Normalize the dropout rate according to the fraction of activated neurons 4 Randomly generate a 0/1 mask z j for h j, with probability p j to be 0; // Sample the dropout mask 5 s j:= (1 − p) −0.75; // Compute the rescaling factor DISPLAYFORM5 // Rescale the outputs 7 return h j Algorithm 1: Jumpout layer for DNN with ReLU.are used with and without BN in a convolutional networks. It shows that using dropout with BN can potentially improve the performance, and larger dropout might in more improvement. However, using the original dropout with BN leads to a significant decrease in the accuracy once increasing the dropout rate over 0.15. In contrast, the performance of dropout using our rescaling keeps improving with increasing dropout rate (until reaching 0.25), and is the best among the four configurations. We combine the three modifications specifically designed to overcome the drawbacks of the original dropout in our proposed improved dropout, which we call "Jumpout" as shown in Alg. 1. Similar to the original dropout, jumpout essentially generates a 0/1 mask for the input neurons, and randomly drop a portion of the neurons based on the mask. Summarizing the novelty of jumpout, instead of using a fixed dropout rate as in the original dropout, jumpout samples from a monotone decreasing distribution as mentioned above to get a random dropout rate. Also, jumpout normalizes the dropout rate adaptively based on the number of active neurons, which enforces consistent regularization and generalization effects on different layers, across different training stages, and on different samples. Finally, jumpout further scales the outputs by (1 − p) −0.75, as opposed to (1 − p) −1 during training, in order to trade-off the mean and variance shifts and synergize well with batchnorm operations. Jumpout requires one main hyper-parameter σ to control the standard deviation of the half-normal distribution, and two auxiliary truncation hyperparameters (p min, p max). Though (p min, p max) can also be tunned, they serve to bound the samples from the half-normal distribution; in practice, we set p min = 0.01 and p max = 0.6, which work consistently well over all datasets and models we tried. Hence, jumpout has three hyperparameters, although we only tuned σ and achieved good performance, as can be seen below. Also, note that here we consider the input h j to be the features of layer j corresponding to one data point. For a mini-batch of data points, we can either estimate q + j separately for each single data point in the mini-batch or apply the average q + j over data points as the estimate for the mini-batch. In practice, we utilize the latter option as we find that it gives comparable performance to the first while using less computation and memory. Jumpout has almost the same memory cost as the original dropout, which is the additional 0/1 drop mask. For computation, jumpout requires counting the number of active neurons, which is insignificant compared to the other layers of a deep model, and sampling from the distribution, which is also insignificant compared to the other computation in DNN training. In this section, we apply dropout and jumpout to different popular DNN architectures and compare their performance on six benchmark datasets at different scales. 
In particular, these DNN architectures include a small CNN with four convolutional layers 2 applied to CIFAR10 BID10 ), WideResNet-28-10 (applied to CIFAR10 and CIFAR100 BID10), "pre-activation" version of ResNet-20 BID6 applied to Fashion-MNIST ("Fashion" in all tables) BID21, WideResNet-16-8 applied to SVHN BID12 and STL10, and ResNet-18 BID5 applied to ImageNet BID3 BID15. The information about the all the datasets can be found in TAB3 at Appendix. For all the experiments about CIFAR and Fashion-MNIST, we follow the standard settings, data preprocessing/augmentation, and hyperparameters used in an existing GitHub repository 3. On ImageNet, we starts from a pre-trained ResNet18 model 4, and train two copies of it with dropout and jumpout respectively for the same number of epochs. The reason for not starting from random initialized model weights is that training DNNs on ImageNet usually does not have overfitting problem if one follows the standard data augmentation methods used to train most modern models, but both dropout and jumpout are most effective in the case of overfitting. Therefore, we choose to start from the pre-trained model, on which training accuracy is relatively high (but still not overfit and very close to the test accuracy) 5. We summarize the experimental in TAB1 which shows that jumpout consistently outperforms dropout on all datasets and all the DNNs we tested. Moreover, for Fashion-MNIST and CIFAR10 on which the test accuracy is already > 95%, jumpout can still bring appreciable improvements. In addition, on CIFAR100 and ImageNet (on which a great number of DNNs and training methods are heavily tunned), jumpout achieves the improvement that can only be obtained by significantly increasing the model size in the past. These verify the effectiveness of jumpout and its advantage comparing to the original dropout. In addition, we conduct a thorough ablation study of all the possible combinations of the three proposed modifications, with reported in TAB0. It further verifies the effectiveness of each modification: 1) each modification improves the vanilla dropout; 2) adding any modification to another brings further improvements; and 3) applying the three modifications together (i.e., jumpout) achieves the best performance. We also provide the learning curves and convergence plots of dropout and jumpout equipped DNNs during training in FIG5. In all the figures, "Jumpout" applies adaptive dropout rate per minibatch. Jumpout exhibits substantial advantages over dropout in early learning stages, and reaches a reasonably good accuracy much faster. In the future, it may be possible to find a better learning rate schedule method specifically for jumpout, so it can reach the final performance earlier than dropout. −1, (1 − p) −0.5 and (1 − p) −0.75 as the dropout rescaling factor applied to y, when p = 0.1. The network is "CIFAR10(s)" (see Sec. 4). The left plot shows the empirical mean of y with dropout divided by the case without dropout (averaged over all layers), and the second plot shows the similar ratio for the variance. Ideally, both ratios should be close to 1. As shown in the plots, (1 − p) −0.75 gives nice trade-offs between the mean and variance rescaling.
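For reference, a minimal sketch of the jumpout layer summarized in Algorithm 1 is given below. The hyper-parameter names (sigma, p_min, p_max) follow the text, but the clamp applied to the normalized rate and the decision to feed the normalized rate into the rescaling factor are our reading of steps that Algorithm 1 leaves implicit.

```python
import torch

def jumpout(h, sigma=0.1, p_min=0.01, p_max=0.6, training=True):
    """h: post-ReLU activations of one layer, shape (batch, features)."""
    if not training:
        return h                                     # identity at test time, like dropout
    q_plus = (h > 0).float().mean().clamp_min(1e-6)  # fraction of activated neurons (per batch)
    p = torch.randn(1, device=h.device).abs() * sigma
    p = p.clamp(p_min, p_max)                        # truncated half-normal dropout rate
    p_eff = (p / q_plus).clamp(max=0.95)             # normalize by the active fraction
    mask = (torch.rand_like(h) > p_eff).float()      # drop each unit w.p. p_eff (a no-op for inactive units)
    scale = (1.0 - p_eff) ** -0.75                   # mean/variance trade-off rescaling
    return h * mask * scale

h = torch.relu(torch.randn(32, 200))
out = jumpout(h, sigma=0.1)
print(out.shape, (out == 0).float().mean().item())   # some extra activations are now zero
```

As with standard dropout, the layer is the identity at test time, so the (1 - p)^-0.75 factor applied during training is what keeps the batch-normalization statistics roughly consistent between the training and test phases.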
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
r1gRCiA5Ym
Jumpout applies three simple yet effective modifications to dropout, based on novel understandings about the generalization performance of DNN with ReLU in local regions.
Concerns about interpretability, computational resources, and principled inductive priors have motivated efforts to engineer sparse neural models for NLP tasks. If sparsity is important for NLP, might well-trained neural models naturally become roughly sparse? Using the Taxi-Euclidean norm to measure sparsity, we find that frequent input words are associated with concentrated or sparse activations, while frequent target words are associated with dispersed activations but concentrated gradients. We find that gradients associated with function words are more concentrated than the gradients of content words, even controlling for word frequency. Researchers in NLP have long relied on engineering features to reflect the sparse structures underlying language. Modern deep learning methods promised to relegate this practice to history, but have not eliminated the interest in sparse modeling for NLP. Along with concerns about computational resources BID0 BID12 and interpretability BID10 BID21, human intuitions continue to motivate sparse representations of language. For example, some work applies assumptions of sparsity to model latent hard categories such as syntactic dependencies BID14 or phonemes BID1. BID13 found that a sparse attention mechanism outperformed dense methods on some NLP tasks; BID11 found sparsified versions of LMs that outperform dense originals. Attempts to engineer sparsity rest on an unstated assumption that it doesn't arise naturally when neural models are learned. Is this true?Using a simple measure of sparsity, we analyze how it arises in different layers of a neural language model in relation to word frequency. We show that the sparsity of a word representation increases with exposure to that word during training. We also find evidence of syntactic learning: gradient updates in backpropagation depend on whether a word's part of speech is open or closed class, even controlling for word frequency. Language model. Our LM is trained on a corpus of tokenized, lowercased English Wikipedia (70/10/20 train/dev/test split). To reduce the number of unique words (mostly names) in the corpus, we excluded any sentence with a word which appears fewer than 100 times. Those words which still appear fewer than 100 times after this filter are replaced with <UNK>. The ing training set is over 227 million tokens of around 19.5K types. We use a standard 2-layer LSTM LM trained with cross entropy loss for 50 epochs. The pipeline from input x t−1 at time step t − 1 to predicted output distributionx for time t is described in Figure 1, illustrating intermediate activations h e t, h 1 t, and h 2 t. At training time, the network observes x t and backpropagates the gradient updates h e t,h 1 t,h 2 t, andx t. The embeddings produced by the encoding layer are 200 units, and the recurrent layers have 200 hidden units each. The batch size is set to forty, the maximum sequence length to 35, and the dropout ratio to 0.2. The optimizer is standard SGD with clipped gradients at 2 = 0.25, where the learning rate begins at 20 and is quartered whenever loss fails to improve. Measuring sparsity. We measure the sparsity of a vector v using the reciprocal of the TaxicabEuclidean norm ratio BID17. This measurement has a long history as a measurement of sparsity in natural settings (Zibulevsky Figure 2: Average sparsity χ(h 2 t) over all training epochs (x-axis), for target words x t occurring more than 100k times in training. 
Target words are sorted from most frequent (bottom) to least frequent (top).and BID23 BID4 BID15 BID22 and is formally defined as χ(v) = v 2 / v 1. The relationship between sparsity and this ratio is illustrated in two dimensions in the image on the right, in which darker blue regions are more concentrated. The pink circle shows the area where 2 ≤ 1 while the yellow diamond depicts 1 ≤ 1. For sparse vectors 1, 0 or 0, 1, the norms are identical so χ is 1, its maximum. For a uniform vector like 1, 1, χ is at its smallest. In general, χ(v) is higher when most elements of v are close to 0; and lower when the elements are all similar in value. Sparsity is closely related to the behavior of a model: If only a few units hold most of the mass of a representation, the activation vector will be highly concentrated. If a neural network relies heavily on a small number of units in determining its predictions, the gradient will be highly concentrated. A highly concentrated gradient is mainly modifying a few specific pathways. For example, it might modify a neuron associated with particular inputs like parentheses BID5, or properties like sentiment BID16.Representations of Target Words. Our first experiments look at the relationship of sparsity to target word x t. Gradient updates triggered by the target are often used to identify units that are relevant to a prediction, and as shown in Figure 2, gradient sparsity increases with both the frequency of a word in the corpus and the overall training time. In other words, more exposure leads to sparser relevance. Because the sparsity ofh 2 increases with target word frequency, we measure not sparsity itself but the Pearson correlation, over all words w, between word frequency and mean χ(h) over representations h where w is the target: DISPLAYFORM0 Here FIG0 we confirm that concentrated gradients are not a of concentrated activations, as activation sparsity χ(h 2) is not correlated with target word frequency. The correlation is strong and increasing only for ρ ← (h 2). The sparse structure being applied is therefore particular to the gradient passed from the softmax to the top LSTM layer, related to how a word interacts with its context. The Role of Part of Speech. FIG1 shows that ρ ← (h 2) follows distinctly different trends for open POS classes 1 and closed classes 2. To associate words to POS, we tagged our training corpus with spacy 3; we associate a word to a POS only if the majority (at least 100) of its occurrences are tagged with that POS. We see that initially, frequent words from closed classes are highly concentrated, but soon stabilize, while frequent words from open classes continue to become more concentrated throughout training. Why?Closed class words clearly signal POS. But open classes contain many ambiguous words, like "report", which can be a noun or verb. Open classes also contain many more words in general. We posit that early in training, closed classes reliably signal syntactic structure, and are essential for shaping network structure. But open classes are essential for predicting specific words, so their importance in training continues to increase after part of speech tags are effectively learned. The high sparsity of function word gradient may be surprising when compared with findings that content words have a greater influence on outputs BID7. However, those findings were based on the impact on the vector representation of an entire sentence after omitting the word. 
BID6 found that content words have a longer window during which they are relevant, which may explain the of BID7. Neither of these studies controlled for word frequency in their analyses contrasting content and function words, but we believe this oversight is alleviated in our work by measuring correlations rather than raw magnitude. Because ρ ← (h 2) is higher when evaluated over more fre- Representations of Input Words. We next looked at the vector representations of each step in the word sequence as a representation of the input word x t−1 that produced that step. We measure the correlation with input word frequency: DISPLAYFORM1 Here FIG0 we find that the view across training sheds some light on the learning process. While the lower recurrent layer quickly learns sparse representations of common input words, ρ → (h 1) increases more slowly later in training and is eventually surpassed by ρ → (h e), while gradient sparsity never becomes significantly correlated with word frequency. studied the activations of feedforward networks in terms of the importance of individual units by erasing a particular dimension and measuring the difference in log likelihood of the target class. They found that importance is concentrated into a small number of units at the lowest layers in a neural network, and is more dispersed at higher layers. Our findings suggest that this effect may be a natural of the sparsity of the activations at lower layers. We relate the trajectory over training to the Information Bottleneck Hypothesis of BID20. This theory, connected to language model training by BID19, proposes that the earlier stages of training are dedicated to learning to effectively represent inputs, while later in training these representations are compressed and the optimizer removes input information extraneous to the task of predicting outputs. If extraneous information is encoded in specific units, this compression would lead to the observed effect, in which the first time the optimizer rescales the step size, it begins an upward trend in ρ → as extraneous units are mitigated. Why do common target words have such concentrated gradients with respect to the final LSTM layer? A tempting explanation is that the amount of information we have about common words offers high confidence and stabilizes most of the weights, leading to generally smaller gradients. If this were true, the denominator of sparsity, gradient 1, should be strongly anti-correlated with word frequency. In fact, it is only ever slightly anti-correlated (correlation > −.1). Furthermore, the sparsity of the softmax gradient χ(x) does not exhibit the strong correlation seen in χ(h 2), so sparsity at the LSTM gradient is not a direct effect of sparse logits. However, the model could still be "high confidence" in terms of how it assigns blame for error during common events, even if it is barely more confident overall in its predictions. According to this hypothesis, a few specialized neurons might be responsible for the handling of such words. Perhaps common words play a prototyping role that defines clusters of other words, and therefore have a larger impact on these clusters by acting as attractors within the representation space early on. Such a process would be similar to how humans acquire language by learning to use words like'dog' before similar but less prototypical words like'canine' BID18. 
As a possible mechanism for prototyping with individual units, BID2 found that some neurons in a translation system specialized in particular word forms, such as verb inflection or comparative and superlative adjectives. For example, a common comparative adjective like'better' might be used as a reliable signal to shape the handling of comparatives by triggering specialized units, while rarer words have representations that are more distributed according to a small collection of specific contexts. There may also be some other reason that common words interact more with specific substructures within the network. For example, it could be related to the use of context. Because rare words use more context than common words and content words use more context than function words BID6, the gradient associated with a common word would be focused on interactions with the most recent words. This would lead common word gradients to be more concentrated. It is possible that frequent words have sparse activations because frequency is learned as a feature and thus is counted by a few dimensions of proportional magnitude, as posited by. Understanding where natural sparsity emerges in dense networks could be a useful guide in deciding which layers we can apply sparsity constraints to without affecting model performance, for the purpose of interpretability or efficiency. It might also explain why certain techniques are effective: for example, in some applications, summing representations together works quite well BID3. We hypothesize that this occurs when the summed representations are sparse so there is often little overlap. Understanding sparsity could help identify cases where such simple ensembling approaches are likely to be effective. Future work may develop ways of manipulating the training regime, as in curriculum learning, to accelerate the concentration of common words or incorporating concentration into the training objective as a regularizer. We would also like to see how sparsity emerges in models designed for specific end tasks, and to see whether concentration is a useful measure for the information compression predicted by the Information Bottleneck.
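For completeness, the sparsity measure χ and the frequency correlations ρ used throughout this analysis can be computed in a few lines of NumPy/SciPy; the dictionary-based bookkeeping in the sketch is only one possible way to organize per-word representations and is not taken from the paper.

```python
import numpy as np
from scipy.stats import pearsonr

def chi(v):
    """Taxicab-Euclidean norm ratio: 1 for a one-hot vector, 1/sqrt(d) for a uniform one."""
    return np.linalg.norm(v, 2) / np.linalg.norm(v, 1)

def rho(freq_by_word, reps_by_word):
    """Pearson correlation between word frequency and mean chi of that word's representations."""
    words = sorted(freq_by_word)
    freqs = [freq_by_word[w] for w in words]
    mean_chi = [np.mean([chi(v) for v in reps_by_word[w]]) for w in words]
    return pearsonr(freqs, mean_chi)[0]

print(chi(np.array([1.0, 0.0, 0.0, 0.0])))   # 1.0   (maximally concentrated)
print(chi(np.array([0.5, 0.5, 0.5, 0.5])))   # 0.5 = 1/sqrt(4)   (maximally dispersed)
```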
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1ets1h56E
We study the natural emergence of sparsity in the activations and gradients for some layers of a dense LSTM language model, over the course of training.
The integration of a Knowledge Base (KB) into a neural dialogue agent is one of the key challenges in Conversational AI. Memory networks has proven to be effective to encode KB information into an external memory to thus generate more fluent and informed responses. Unfortunately, such memory becomes full of latent representations during training, so the most common strategy is to overwrite old memory entries randomly. In this paper, we question this approach and provide experimental evidence showing that conventional memory networks generate many redundant latent vectors ing in overfitting and the need for larger memories. We introduce memory dropout as an automatic technique that encourages diversity in the latent space by 1) Aging redundant memories to increase their probability of being overwritten during training 2) Sampling new memories that summarize the knowledge acquired by redundant memories. This technique allows us to incorporate Knowledge Bases to achieve state-of-the-art dialogue generation in the Stanford Multi-Turn Dialogue dataset. Considering the same architecture, its use provides an improvement of +2.2 BLEU points for the automatic generation of responses and an increase of +8.1% in the recognition of named entities. Given the large amount of dialogue data recorded in human-human or human-chatbot interactions, there is a great need for dialogue systems that infer automatic responses grounded to personal knowledge bases. This approach has the advantage of integrating semantic information that is fundamental to achieve dialogue understanding. We want to leverage the contextual information present in a KB (e.g., a calendar of events) to answer queries like What time is my dentist appointment?. This task is challenging because existing neural dialogue agents often assume that the dialogue history carries the information needed to provide an answer but struggle to interface with the structured data stored in a KB. This assumption prevents to have an end-to-end differentiable model to maintain the kind of contextual conversations that people desire. Memory networks has proven to be effective to encode KB information into an external memory to generate more fluent and informed responses. However, there is no much work in regularizing the latent representations stored in the external memory. Unlike the conventional dropout technique used to regularize deep neural networks , we propose memory dropout to attain the same goal (i.e., reduction of overfitting) but with different functionality and designed for memory networks. Given the long-term nature of memory networks, we do not immediately remove redundant memories with some probability as in the original dropout algorithm. Instead, we assign them the current maximum age increasing their probability of being overwritten by more recent latent representations in future training steps. Thus, in contrast to , our memory dropout is a delayed regularization mechanism. The main contributions of our work are the following: • We introduce a new regularization method called memory dropout designed for dealing with overfitting in Memory Augmented Neural Networks. To our best knowledge, ours is the first work on regularizing memory networks. • We build a neural dialogue agent that uses memory dropout to incorporate KB into an external memory for automatic response generation. 
Our results show that this technique can generate more fluent and accurate responses: an improvement of +2.2 BLEU points and +8.1% Entity F1 score versus not using it in the Stanford Multi-Turn Dialogue dataset. Figure 1: Learning the (h, y) pair transitions the neighborhood of h (represented as an ellipse) to a new state in which a memory ĥ is drawn from the distribution of positive memories. Small circles represent the uncertainty of using a particular memory to model h. In the new memory configuration, we age positive keys (now faded in grey), making them more likely to be overwritten by other training examples. This section describes the memory dropout neural model whose goal is to increase the diversity of latent representations stored in an external memory. For example, consider a neural encoder which receives an observation (x, y) and generates the latent representation h in a hidden layer. As illustrated in Figure 1, we want to incorporate a normalized h (i.e., ||h|| = 1) into the long-term memory M. The memory entries most similar to h form a neighborhood in which entries can be positive (+) or negative (-) depending on whether they share the same class label y or not. An external memory M augments the capacity of a neural encoder by preserving long-term latent representations. Figure 2 illustrates a memory network which consists of arrays K and V to store keys (latent representations) and values (class labels), respectively, as introduced in prior work. To support our technique, we extend this definition with arrays A and S to store the age and the variance of each key, respectively. The final form of the memory module is thus M = (K, V, A, S). Our main goal is to learn a mathematical space in which the margin between positive and negative memories is maximum while retaining the minimum number of positive keys. Core to this idea is the definition of a differentiable Gaussian Mixture Model parameterized by both the location and covariance matrix of each positive memory. Sampling from this distribution returns a new positive embedding ĥ that characterizes its neighbors. So, given the embedding vector h, the collection of P positive keys {k⁺_1, ..., k⁺_P} is a subpopulation of the memory keys that can be represented as a linear superposition of P Gaussian components aimed at providing a rich class of density model of the form p(k) = Σ_{p=1..P} π_p N(k | µ_p, Σ_p) (Equation 1), where each Gaussian is centered at a positive key µ_p = k⁺_p with covariance matrix Σ_p = diag(s⁺_p). Note that without the variances stored in the array S, the uncertainty of each key would be the same and extreme embedding vectors would dominate the likelihood probability. We shall view π = {π_1, ..., π_P} in Equation 1 as a vector of probabilities that quantifies the mixing coefficients of the Gaussian components and whose values correspond to the similarity between h and the positive keys K⁺. Figure 2: A memory network consists of a neural encoder and an external memory that augments its capacity by preserving longer distinct versions of h during training. Some memory entries (in gray) are positive candidates to correctly answer the embedding h. The conditional distribution of a new key k′ given a particular Gaussian component is the corresponding Gaussian density, so sampling an index i over the P mixture components under the distribution π generates the new key k′ as a random variable from p(k′ | π_p=i). Being sampled from the mixture model, k′ is representative of the subpopulation of positive memories.
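Before continuing to the write step, here is a minimal sketch of the sampling just described: drawing a representative key from the mixture of positive memories. This is a reading aid rather than the authors' implementation; the softmax over dot-product similarities for the mixing coefficients, the diagonal covariances, and all names are illustrative assumptions.

```python
import numpy as np

def sample_positive_key(h, pos_keys, pos_vars, rng=None):
    """Sample a new key k' that summarizes the positive neighborhood of h.

    h:        (d,) normalized query embedding
    pos_keys: (P, d) positive keys k_p^+ in the neighborhood of h
    pos_vars: (P, d) per-dimension variances s_p^+ stored in the array S
    """
    rng = rng or np.random.default_rng()
    sims = pos_keys @ h                         # similarity between h and each positive key
    pi = np.exp(sims - sims.max())
    pi /= pi.sum()                              # mixing coefficients pi_1..pi_P (assumed softmax)
    i = rng.choice(len(pi), p=pi)               # sample a mixture component
    k_new = rng.normal(pos_keys[i], np.sqrt(pos_vars[i]))  # draw from N(mu_i, diag(s_i^+))
    return k_new / np.linalg.norm(k_new)        # keep keys normalized, like h
```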
Then, we incorporate the information encoded by the latent vector h into the external memory via the new key k′: we write k′, reset its corresponding age, and compute its variance to account for the observed uncertainty in approximating h with k′. Finally, the aging mechanism penalizes redundant keys indexed by i⁺ by assigning them the current maximum age. We study the memory dropout neural model in a realistic scenario: as the external memory of a dialogue system that infers automatic responses grounded to a Knowledge Base (KB). Our goal is to leverage the contextual information present in a KB (e.g., a calendar of events) to answer queries like What time is my dentist appointment? This task is challenging because existing neural dialogue agents often assume that the dialogue history carries the information needed to provide an answer but struggle to interface with the structured data stored in a KB. This assumption prevents us from having an end-to-end differentiable model that maintains the kind of flexible conversations that people desire. Figure 3 illustrates our proposed architecture, formed by a Sequence-to-Sequence model to represent the dialogue history and a Memory Augmented Neural Network (MANN) to encode the KB. To encode the KB, the addressable memory entries of a Memory Network allow us to generalize with fewer latent representations of the KB, even if they were present only once during training. Inspired by prior work, we store the KB of a given dialogue by decomposing it from its tabular format into a collection of triplets that express the relationship (subject, relation, object). For example, an entry in the KB representing a dentist appointment (event=dentist, date=the 19th, time=5pm, party=Mike) would be normalized into 12 different triplets: [(dentist, date, the 19th), (dentist, time, 5pm), (dentist, party, Mike), (the 19th, event, dentist), (the 19th, party, Mike), (the 19th, time, 5pm), (5pm, event, dentist), (5pm, date, the 19th), (5pm, party, Mike), (Mike, event, dentist), (Mike, date, the 19th), (Mike, time, 5pm)]. Figure 3: Architecture of the neural dialogue model that incorporates a KB. Note the i-th decoding step of the word ŷ_i given the attention over the external memory, which encodes KB triplets in its keys and uses memory dropout for model regularization. Each triplet feeds the MANN in a key-value format (the key encodes the subject and relation; the value is the object), where φ_emb is a trainable embedding function that maps input tokens to a fixed-dimensional vector. To encode the dialogue, we use an encoder-decoder network architecture with an LSTM encoder that outputs a context-sensitive hidden representation h^enco based on the dialogue history and an LSTM decoder that generates the next response ŷ. At every timestep of decoding, the decoder predicts the i-th token of the response ŷ by computing its corresponding hidden state h^deco_i with the LSTM recurrence. Instead of directly using the decoder output h^deco to predict over the dialogue vocabulary as in prior work, we combine h^deco with the result of querying the memory module. We compute this vector, h^KB_i, as the additive attention score between h^deco and each key. This operation results in the unnormalized probabilities of including each corresponding value in the prediction. More formally, the attention and output logits are computed with trainable parameters W_1, W_2, W_vocab_dlg, and W_vocab_KB.
Here, o_i represents the concatenation of an extended vocabulary T formed by the logits over the dialogue tokens and the logits over the distinct values stored in the value array V. A Softmax induces a probability distribution over the extended vocabulary and we apply argmax to obtain the index of the predicted token. Naturally, the objective function is to minimize the cross entropy between the actual and generated responses, where N is the number of dialogues, L_j is the number of turns in the j-th dialogue, m is the length of the generated response, and y^{j,k}_i is the one-hot encoding of the i-th word in the actual response. We evaluate our proposed method on the Stanford Multi-Turn Dialogue (SMTD) dataset. This dataset consists of 3,031 dialogues in the domain of an in-car assistant, which provides automatic responses grounded to a personalized KB, known only to the in-car assistant. The entries of the KB may contain information for satisfying a query formulated by the driver. There are three types of KBs: a schedule of events, the weekly weather forecast, and information for point-of-interest navigation. Table 1: Averaged and per-domain BLEU and Entity F1 scores for the SMTD dataset. We compare our approach, the Memory Augmented Neural Network with Memory Dropout (MANN+MD), with the following baseline models: • Seq2Seq+Attention: An encoder-decoder architecture that maps between sequences with minimal assumptions on the sequence structure and attends to parts of the input to predict the target word. This approach only incorporates information from the dialogue history. • Key-Value Retrieval Network (KVRN): A memory network with no memory update operations; it also computes attention over the keys of each entry in the KB. • Memory Augmented Neural Network (MANN): Our proposed model with no memory dropout mechanism. For all the experiments, we use a word embedding of size 256. The encoders and decoders are bidirectional LSTMs with 3 layers and a state size of 256 for each direction. For the memory network models, the number of memory entries is 1,000. We train all models with the Adam optimizer with a learning rate of 0.001 and initialize all weights from a uniform distribution in [−0.01, 0.01]. We also apply dropout with a keep probability of 95.0% for the input and output of the recurrent neural networks. Finally, we split the dataset into partitions with ratios 0.8, 0.1, and 0.1 for generating the training, validation, and testing datasets, respectively. Evaluating dialogue systems is challenging because a trained model can generate free-form responses. We employ two metrics to quantify the performance of a model grounded to a knowledge base. • BLEU is a metric of fluency that looks at n-gram precisions for n = 1, 2, 3, 4, comparing exact matches of words between responses provided by a model and human annotators. • Entity F1 measures the correct retrieval of entities expected to be present in a generated response. We found that memory dropout improves both dialogue fluency and accuracy in recognizing entities compared to other memory networks that did not use memory dropout. Table 1 reports averaged and per-domain results on this dataset. Not attending to the KB seems to have an adverse effect on generating automatic responses. For example, the Seq2Seq+Attention model shows the lowest Entity F1 score (30.0%), indicating its inability to infer responses while being agnostic to the KB. The memory network MANN attends to the KB to predict a response and achieves a BLEU score of 11.2 and an Entity F1 score of 50.3%.
With memory dropout, the MANN+MD increases both BLEU and Entity F1 scores to 13.4 and 58.4%, respectively. Note that both MANN and MANN+MD share the same architecture and hyperparameters and only differ in the use of memory dropout. On the other hand, KVRN also attends to the KB and is the best performing neural network that does not use memory dropout. This method achieves a BLEU score of 13.2 and an Entity F1 score of 48.0%. Our approach outperforms KVRN by +10.4% in the Entity F1 score and provides a slight advantage of +0.2 in the BLEU score, setting a new state of the art for this dataset. Interestingly, KVRN only outperforms the other models in the Scheduling Entity F1 domain with 62.9%. This can be because half of these dialogues are commands to execute a task and thus do not require a KB (e.g., Set a meeting reminder at 3pm). The explicit penalization of redundant keys during training could explain the gains obtained by MANN+MD. In order to test this hypothesis, we now study the correlation of keys in Section 4.4. We found that keys in memory tend to become redundant as training progresses. To observe this effect, for each training step we compute the Pearson correlation between each pair of keys and average these values. Figure 4 compares the degree of linear correlation in the memory networks studied in this paper. Comparing the results, we can see that initially all models show low correlations as keys are randomly initialized. In the following steps, both MANN and KVRN show increasing correlation values, indicating that more redundant keys are stored in memory over time. In contrast, MANN+MD shows low correlation values which do not increase at the same rate as the other methods and reach stable values around step 25,000. We conclude that using memory dropout explicitly encourages the overwriting of redundant keys, which leads to diverse representations in the latent space. In order to test the advantage of using memory dropout for overfitting reduction, 1) we compare the Entity F1 scores for the MANN and MANN+MD models during training considering different neighborhood sizes, and 2) we disable traditional dropout for the inputs and outputs of the encoder and decoder in an attempt to isolate the contribution of memory dropout. Figure 5 shows two groups of behavior in each plot. During training, not using memory dropout (MANN) leads to higher Entity F1 scores, a reasonable outcome considering that no regularizer is present and the external memory increases memory capacity. During testing, MANN shows lower Entity F1 scores. This is a sign of a model that overfits its training dataset and has problems generalizing to the testing dataset. On the other hand, using memory dropout (MANN+MD) provides a more conservative performance during training but better Entity F1 scores during testing, resulting in an average improvement of 10%. Testing with different neighborhood sizes (from 5 to 15 elements) shows in general the same behavior: two groups of well-defined Entity F1 scores grouped by whether they use the memory dropout technique or not. To test the usefulness of large memories when encoding a KB with memory dropout, we compare the models that use an external memory and consider different memory sizes. The task is to compute the Entity F1 score during automatic response generation. We found that a side-effect of using an external memory is the need for larger memories to accommodate the large number of redundant activations generated during training.
As seen in the experiments of Section 4.4 (memory correlations) and Section 4.6 (memory size), using memory dropout leads to storing diverse keys, and therefore we can use smaller memories to obtain higher levels of accuracy in this dataset. Deep neural networks are models for solving classification tasks that involve non-linearly separable classes. Memory networks consider an external differentiable memory in which a neural encoder manages the hidden states of the model using attention to address similar content in memory. Some similar approaches have also studied the problem of few-shot learning as a way to remember infrequent patterns, a problem that we also observed when training with small knowledge bases. Some representative examples also use an external memory to extend the capacity of the entire architecture. Also, Neural Turing Machines (NTM) are differentiable architectures allowing efficient training with gradient descent and showing important properties of associative recall for learning different sequential patterns. In this paper, we extend the key-value architecture introduced in prior work because of its simplicity and because it has been shown to be effective for learning small datasets in text and visual domains. Deep models have also been used to train dialogue agents. Often they model the dialogue state considering belief tracking and generation components. More recent architectures consider the use of a knowledge base and an external memory to encode its content. Although the key-value architecture of this system allows for incorporating domain-specific knowledge with no need for dialogue state trackers, it overfits the training dataset, impacting the accuracy and fluent generation of responses. Our model contrasts with this work by designing a memory-augmented model that deals directly with overfitting and requires a smaller memory, as shown in our experiments. The regularization of neural networks is also an important problem, and regularization has proven effective for controlling overfitting and generating sparse activations during training. One line of work proposes the regularization of state transitions in a recurrent neural network, but the notion of memory is still internal and individual memories cannot be addressed. The popular dropout technique works in hidden layers as a form of model averaging. In contrast, our memory dropout is a delayed (age-sensitive) regularization mechanism that works at the level of memory entries and not individual activations. To the best of our knowledge, ours is the first work that addresses the regularization of memory networks and demonstrates its effectiveness in a challenging task such as automatic dialogue response generation. Memory Dropout is a technique for improving memory augmented neural networks by breaking co-adapting memories built during backpropagation. While conventional dropout works at the level of individual activations, our memory dropout deals with latent representations of the input. These arrays of activations are stored in an external memory module which resembles areas of the human brain that are content-addressable and sensitive to semantic information. Central to this technique is the idea that age and uncertainty play important roles in regularizing the addressable keys of an external memory module that is persistent across training examples. By doing this, we obtain higher BLEU and Entity F1 scores when training a task-oriented dialogue agent that decodes an answer considering the entries of a KB stored in the memory module.
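As a summary of the mechanism described in this paper, the sketch below shows one possible write step with memory dropout. It is a reading aid under stated assumptions, not the authors' code: the nearest-neighbor search, the variance update for S, the per-step age increment, and the fallback when no positive neighbor exists are all assumptions, since the exact update rules are not fully visible in the text above.

```python
import numpy as np

def memory_dropout_write(M, h, y, k_new, neighborhood=8):
    """One write step with memory dropout over a memory M = (K, V, A, S).

    M is assumed to be a dict of arrays: K (keys), V (class labels),
    A (ages), S (per-dimension variances); k_new is a sampled summary key.
    """
    K, V, A, S = M["K"], M["V"], M["A"], M["S"]
    sims = K @ h
    nbrs = np.argsort(-sims)[:neighborhood]        # neighborhood of h
    pos = nbrs[V[nbrs] == y]                       # positive (same-label) entries
    A += 1                                         # assumption: standard age increment each write
    if len(pos) > 0:
        A[pos] = A.max()                           # age redundant positive keys: likely to be overwritten later
        slot = int(pos[0])                         # reuse one positive slot for the summary key
        S[slot] = (k_new - h) ** 2                 # uncertainty of approximating h with k_new
    else:
        slot = int(np.argmax(A))                   # otherwise overwrite the oldest entry
        S[slot] = np.full_like(h, 1e-2)            # small default variance (assumption)
    K[slot], V[slot], A[slot] = k_new, y, 0        # write the new key and reset its age
    return M
```

The key point the sketch tries to capture is the delayed regularization: redundant positive keys are not deleted outright, they are only aged so that later writes become more likely to overwrite them.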
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJl7tREFvr
Conventional memory networks generate many redundant latent vectors resulting in overfitting and the need for larger memories. We introduce memory dropout as an automatic technique that encourages diversity in the latent space.
Su, Boyd, and Candes made a connection between Nesterov's method and an ordinary differential equation (ODE). We show that if a Hessian damping term is added to the ODE from Su-Boyd-Candes, then Nesterov's method arises as a straightforward discretization of the modified ODE. Analogously, in the strongly convex case, a Hessian damping term is added to Polyak's ODE, which is then discretized to yield Nesterov's method for strongly convex functions. Despite the Hessian term, both second order ODEs can be represented as first order systems. Established Liapunov analysis is used to recover the accelerated rates of convergence in both continuous and discrete time. Moreover, the Liapunov analysis can be extended to the case of stochastic gradients, which allows the full gradient case to be considered as a special case of the stochastic case. The result is a unified approach to convex acceleration in both continuous and discrete time and in both the stochastic and full gradient cases. Su, Boyd, and Candes made a connection between Nesterov's method for a convex, L-smooth function, f, and the second order ordinary differential equation (ODE) ẍ + (3/t)ẋ + ∇f(x) = 0 (A-ODE), but did not show that Nesterov's method arises as a discretization of (A-ODE). In order to obtain such a discretization, we consider the following ODE, which has an additional Hessian damping term with coefficient 1/√L. DISPLAYFORM0 Notice that (H-ODE) is a perturbation of (A-ODE), and the perturbation goes to zero as L → ∞. Similar ODEs have been studied by BID1; they have been shown to accelerate gradient descent in continuous time. Next, we consider the case where f is also µ-strongly convex, and write C_f := L/µ for the condition number of f. Then Nesterov's method in the strongly convex case arises as a discretization of the following second order ODE: DISPLAYFORM1 (H-ODE-SC) is a perturbation of Polyak's ODE, ẍ + 2√µ ẋ + ∇f(x) = 0, which accelerates gradient descent when f is quadratic. In each case, both continuous and discrete, as well as convex and strongly convex, it is possible to provide a proof of the rate using a Liapunov function. These proofs are already established in the literature: we give citations below, and also provide proofs in the Appendix. Moreover, the analysis for Nesterov's method in the full gradient case can be extended to prove acceleration in the case of stochastic gradients. Acceleration of stochastic gradient descent has been established by BID7; see also BID8. A direct acceleration method with a connection to Nesterov's method was done by BID0. Our analysis unifies the continuous time ODE with the algorithm, and includes full gradient acceleration as a special case. The analysis proceeds by first rewriting (H-ODE) (and (H-ODE-SC)) as first order systems involving ∇f, and then replacing the ∇f with g = ∇f + e. Both the continuous and discrete time methods achieve the accelerated rate of convergence, provided |e| goes to zero quickly enough. The condition on |e| is given below; it is faster than the corresponding rate for stochastic gradient descent. When e = 0 we recover the full gradient case. The renewed interest in the continuous time approach began with the work of Su, Boyd, and Candes and was followed by others. Continuous time analysis also appears in BID6, BID11, and BID10. However, continuous time approaches to optimization have been around for a long time. Polyak's method is related to successive over relaxation for linear equations, which were initially used to accelerate solutions of linear partial differential equations.
A continuous time interpretation of Newton's method can be found in BID1 and related work. The mirror descent algorithm has a continuous time interpretation BID5. The Liapunov approach for acceleration had already appeared in BID4 for FISTA. The question of when discretizations of dynamical systems also satisfy a Liapunov function has been studied in the context of stabilization in optimal control BID12. More generally, other work studies when a discretization of a dynamical system preserves a property such as energy dissipation. Despite the Hessian term, (H-ODE) can be represented as the following first order system. Lemma 2.1. The second order ODE (H-ODE) is equivalent to the first order system DISPLAYFORM0 Proof. Solve for v in the first line of (1st-ODE) DISPLAYFORM1 Insert into the second line of (1st-ODE) DISPLAYFORM2 Simplify to obtain (H-ODE). The system (1st-ODE) can be discretized using the forward Euler method with a constant time step, h, to obtain Nesterov's method. Definition 2.2. Define y_k as the following convex combination of x_k and v_k. DISPLAYFORM3 Let h > 0 be a given small time step/learning rate and let t_k = h(k + 2). The forward Euler method for (1st-ODE) with gradients evaluated at y_k is given by DISPLAYFORM4 Remark 2.3. The forward Euler method simply comes from replacing ẋ with (x_{k+1} − x_k)/h and similarly for v. Normally the velocity field is simply evaluated at x_k, v_k. The only thing different about (FE-C) from the standard forward Euler method is that ∇f is evaluated at y_k instead of x_k. However, this is still an explicit method. More general multistep methods and one leg methods in this context are discussed in the literature. Recall the standard Nesterov's method from Nesterov (2013, Section 2.2) DISPLAYFORM5 Theorem 2.4. The discretization of (H-ODE) given by (FE-C) with h = 1/√L and t_k = h(k+2) is equivalent to the standard Nesterov's method (Nest). Proof. Eliminate the variable v using the definition of y_k to obtain (Nest). Now we consider µ-strongly convex and L-smooth functions, f, and write C_f := L/µ for the condition number. We first show that (H-ODE-SC) can be represented as a first order system. Lemma 2.5. The second order ODE (H-ODE-SC) is equivalent to the first order system DISPLAYFORM0 Proof. Solve for v in the first line of (1st-ODE-SC) DISPLAYFORM1 Insert into the second line of (1st-ODE-SC) DISPLAYFORM2 Simplify to obtain (H-ODE-SC). System (1st-ODE-SC) can be discretized using a forward Euler method with a constant time step h to obtain Nesterov's method. Let h > 0 be a small time step, and apply the forward Euler method for (1st-ODE-SC) evaluated at y_k: DISPLAYFORM3 where, DISPLAYFORM4 Now we recall the usual Nesterov's method for strongly convex functions from Nesterov (2013, Section 2.2) DISPLAYFORM5 Theorem 2.6. The discretization of (H-ODE-SC) given by (FE-SC) with h = 1/√L is equivalent to the standard Nesterov's method (SC-Nest). Proof. Eliminate the variable v_k using the definition of y_k to obtain (SC-Nest). 3 LIAPUNOV ANALYSIS 3.1 CONVEX CASE: CONTINUOUS AND DISCRETE TIME Definition 3.1. Define the continuous time Liapunov function DISPLAYFORM0 where E(t, x, v) is given by. In particular, for all t > 0, DISPLAYFORM1 Furthermore, let x_k, v_k be given by (FE-C). Then for all k ≥ 0, DISPLAYFORM2 In particular, if DISPLAYFORM3 then E_k is decreasing. When equality holds in, DISPLAYFORM4 Most of the results stated above are already known, but for completeness we provide the proofs in Appendix A. Since (FE-C) is equivalent to Nesterov's method, the rate is known.
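For reference, the standard Nesterov iteration (Nest) that Theorem 2.4 recovers can be written in a few lines. The sketch below is illustrative only, assuming the Su-Boyd-Candes-style momentum coefficient k/(k+3) and a constant step 1/L; it does not reproduce the paper's (FE-C) variables x_k, v_k explicitly.

```python
import numpy as np

def nesterov(grad_f, x0, L, num_steps=500):
    """Textbook Nesterov method for a convex, L-smooth function f."""
    x_prev = x0.copy()
    y = x0.copy()
    for k in range(num_steps):
        x = y - grad_f(y) / L                     # gradient step at the extrapolated point y_k
        y = x + (k / (k + 3.0)) * (x - x_prev)    # momentum / extrapolation step
        x_prev = x
    return x

# Example usage: minimize a simple quadratic f(x) = 0.5 * x^T Q x
Q = np.diag([1.0, 10.0])
x_star = nesterov(lambda x: Q @ x, np.array([5.0, 5.0]), L=10.0)
```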
The proof of the rate using a Liapunov function can be found in BID4. Refer to the cited work, which shows that we can use the constant time step. The discrete Liapunov function was used in Su et al. to prove a rate. 3.2 STRONGLY CONVEX CASE: CONTINUOUS AND DISCRETE TIME Definition 3.3. Define the continuous time Liapunov function E(x, v) DISPLAYFORM5 Define the discrete time Liapunov function by DISPLAYFORM6 Proposition 3.4. Let (x, v) be the solution of (1st-ODE-SC), then DISPLAYFORM7 In particular, for all t > 0, DISPLAYFORM8, we have DISPLAYFORM9 In particular, for h = DISPLAYFORM10 The discrete Liapunov function E_k was used to prove a rate in the strongly convex case by prior work. The proof can be found in Wilson et al. (2016, Theorem 6). For completeness we also provide the proof in Appendix E. In the appendix we present results in continuous and discrete time for (non-accelerated) stochastic gradient descent. We also present results in continuous time for the stochastic accelerated case in the Appendix. We present the results in discrete time here. In this section we consider stochastic gradients, which we write as a gradient plus an error term DISPLAYFORM0 The stochastic gradient can be abstract, or the error can come from a mini-batch gradient when f is a sum. Moreover, we can include the case where DISPLAYFORM1 corresponding to a correction by a snapshot of the full gradient at a snapshot location, which is updated every m iterations, as in Johnson & Zhang. The combination of gradient reduction and momentum was discussed in prior work. In order to obtain the accelerated rate, our Liapunov analysis requires that the |e_i| be decreasing fast enough. This can also be accomplished in the minibatch setting by using larger minibatches. In this case, the required rate of decrease of e_i gives a schedule for minibatch sizes. A similar result was obtained in related work. When we replace gradients with stochastic gradients, the Forward Euler scheme (FE-C) becomes DISPLAYFORM2 where y_k is given by, h is a constant time step, and t_k := h(k + 2). In Appendix C, we study the continuous version of (Sto-FE-C) and obtain a rate of convergence using a Liapunov function. Definition 4.1. Define the discrete stochastic Liapunov function Ẽ_k := E_k + I_k, for k ≥ 0, where E_k is given above, e_{−1} := 0, and for k ≥ 0, DISPLAYFORM3 Theorem 4.2. Assume that the sequence e_k satisfies DISPLAYFORM4 We immediately have the following result. DISPLAYFORM5 Remark 4.4. The assumption on e_k is satisfied, for example, by a sequence of the form |e_k| = 1/k^α for any α > 2. By comparison, for SGD the corresponding condition is satisfied by such sequences with α > 1. Thus the norm of the noise needs to go to zero faster for accelerated SGD compared to regular SGD (see Appendix B) in order to obtain the rate. Remark 4.5. In Theorem 4.2, we focus on the maximum possible time step h = 1/√L. The result is still true if we shrink the time step. In this case, I_k can be defined using the tails h DISPLAYFORM6, e_{i−1}. In this section, we consider the stochastic gradient, which we write as a gradient plus an error, as in Section 4.1. In Appendix B.2, we study stochastic gradient descent, and Appendix C.2 is devoted to the analysis of the continuous framework of the stochastic accelerated method. The Forward Euler scheme (FE-SC) becomes DISPLAYFORM0 where e_k is a given error and DISPLAYFORM1 Inspired by the continuous framework (Appendix C.2), we define a discrete Lyapunov function. Definition 4.6.
DefineẼ k:= E k + I k, where E k is given by and DISPLAYFORM2 with the convention e −1 = 0.Then we obtain the following convergence for sequences generated by (Sto-FE-SC). Theorem 4.7. Let x k, v k be two sequences generated by the scheme (Sto-FE-SC) with initial con- DISPLAYFORM3 Then,Ẽ DISPLAYFORM4 In addition, sup i≥0 |v i − x * | ≤ M for a positive constant M and DISPLAYFORM5 We include the proof of Theorem 4.7 since this is new. Proof of Theorem 4.7. First we prove that DISPLAYFORM6 For the term I k, we obtain DISPLAYFORM7 Putting all together, we obtaiñ DISPLAYFORM8 And by definition of v k+1 − v k, we havẽ DISPLAYFORM9 We conclude, as in the convex case, applying discrete Gronwall Lemma and. DISPLAYFORM10 The proof is concluded by convexity, DISPLAYFORM11 Proof of Proposition 3.4. Using (1st-ODE-SC), we obtain DISPLAYFORM12 By strong convexity, we have DISPLAYFORM13 Let e: [0, +∞) → R d be a integrable function. Consider the gradient descenṫ DISPLAYFORM0 Then define the Lyapunov function,Ẽ, bỹ DISPLAYFORM1 where, DISPLAYFORM2 and, DISPLAYFORM3 Then the following holds. Proposition B.1. Let x be a solution of with initial condition x 0. Then, DISPLAYFORM4 Proof. For all t > 0, we have DISPLAYFORM5 Then, since f is convex, we obtain the first . We deduce thatẼ is decreasing. Arguing as along with the co-coercivity inequality, we prove that sup s≥0 |x(s) − x * | < +∞, sup s≥0 s|∇f (x(s))| < +∞ which concludes the proof. The discretization of FORMULA5 is DISPLAYFORM6 where e k = e(hk). DISPLAYFORM7 and, DISPLAYFORM8 Proposition B.2. Let x k be the sequence generated by with initial condition x 0. Assume that h satisfies, for all k ≥ 0, DISPLAYFORM9 Proof. By L-smoothness and convexity of f, we have DISPLAYFORM10 In addition, DISPLAYFORM11 when h satisfies. We conclude the proof with the same argument as Proposition B.1. Let us study the equationẋ DISPLAYFORM0 for an error function, e satisfying +∞ 0 e µs |e(s)| ds < +∞.This condition on the error function is classical. The case e = 0 is satisfied trivially and corresponds to the gradient descent ODE.We define the function DISPLAYFORM1 where, DISPLAYFORM2 Then we have the following . Proposition B.3. Let x be a solution of with initial data x 0 and suppose that e satisfies. DISPLAYFORM3 In addition, sup t≥0 |x − x * | < +∞ and DISPLAYFORM4 Therefore E(t, x(t)) is decreasing and then for all t > 0, DISPLAYFORM5 By Gronwall Lemma and FORMULA5, we deduce that sup t≥0 |x − x * | < +∞ and the proof is concluded. The discretization of is DISPLAYFORM6 where e k = e(hk). We define E k, for k ≥ 1, by DISPLAYFORM7 where, DISPLAYFORM8 with the notation e −1 = 0. DISPLAYFORM9 In addition, if the sequence (1 − hµ) −i |e i | is summable, sup i≥1 |x i − x * | < +∞ and we deduce, DISPLAYFORM10 Proof. First, as usual, we have DISPLAYFORM11 In addition, DISPLAYFORM12 Combining these two inequalities, DISPLAYFORM13 In order to conclude, we also need to establish that E k is bounded below. That follows from discrete Gronwall's inequality, as was already done in the continuous case in Proposition B.3. In this section, we consider that an error e(t) is made in the calculation of the gradient. We study the following perturbation of system (1st-ODE), DISPLAYFORM0 where e is a function satisfying DISPLAYFORM1 The corresponding ODE is DISPLAYFORM2 We follow the argument from Attouch et al. (2016, section 5) to define a Lyapunov function for this system. LetẼ be defined byẼ DISPLAYFORM3 where, DISPLAYFORM4 and DISPLAYFORM5 Lemma C.1. 
Let (x, v) be a solution of (Sto-1st-ODE) with initial condition (x, v) = (x 0, v 0) and suppose that e satisfies. Then DISPLAYFORM6 In addition, sup t≥0 |v(t) − x * | < +∞ and sup t≥0 |t∇f (x)| < +∞.Proof. Following the proof of Proposition 3.2, we have DISPLAYFORM7 In particular,Ẽ is decreasing and DISPLAYFORM8 Using the inequality of co-coercitivity, we obtain 1 2L DISPLAYFORM9 Using FORMULA5, we conclude applying Gronwall Lemma. Then we deduce Proposition C.2. Let (x, v) be a solution of (Sto-1st-ODE) with initial condition (x, v) = (x 0, v 0) and suppose that e satisfies. Then, DISPLAYFORM10 Define the perturbed system of (1st-ODE-SC) by DISPLAYFORM0 where e is a locally integrable function. Definition C.3. Define the continuous time Liapunov function E(x, v) DISPLAYFORM1 Define the perturbed Liapunov functionẼ, bỹ E(t, x, v):= E(x, v) + I(t, x, v), DISPLAYFORM2 Lemma C.5. Suppose f is bounded from below and s → e √ µs e(s) ∈ L 1. Let (x, v) be a solution of (Sto-1st-ODE-SC), then sup t≥0 |v(t) − x * | < +∞ and sup t≥0 |∇f (x)| < +∞.Proof. Same as Attouch et al. (2016, Lemma 5.2), using the fact that DISPLAYFORM3 ) is decreasing and Gronwall's inequality. Then, combining the two previous , we obtain: Corollary C.6. Suppose that s → e λs e(s) is a L 1 (0, +∞) function. Let (x, v) be a solution of (Sto-1st-ODE-SC) with initial condition (x, v) = (x 0, v 0). Then, DISPLAYFORM4 where, DISPLAYFORM5 Proof. By Proposition C.4 and Gronwall's Lemma, we havẽ DISPLAYFORM6 This is equivalent to DISPLAYFORM7 which concludes the proof with Lemma C.5. First, using the convexity and the L-smoothness of f, we obtain the following classical inequality (see or in the case e k = 0), DISPLAYFORM0 Then, we have DISPLAYFORM1 By defintion of v k+1, we have DISPLAYFORM2 In addition, DISPLAYFORM3 Combining these three previous inequalities, we obtaiñ DISPLAYFORM4, we deduce thatẼ k is decreasing. In particular, DISPLAYFORM5 and the discrete version of Gronwall Lemma gives the since (i+3)|e i | is a summable sequence due to.E PROOF OF PROPOSITION 3.4To simplify, we denote λ h = h √ µ 1+h √ µ. Note, however, since the gradients are evaluated at y k, not x k, the first step is to use strong convexity and L-smoothness to estimate the differences of E in terms of gradients evaluated at y k. Lemma E.1. Suppose that f is a µ-stgrongly convex and L-smooth function, then DISPLAYFORM6 Proof. First, we remark that DISPLAYFORM7 Since the first line of (1st-ODE-SC) can be rewritten as DISPLAYFORM8 we obtain.Proof of Proposition 3.4. Once FORMULA25 is established, since the expression on the right hand side is monotone in h, the largest choice of h is given by h = 1 √ L, which leads immediately to.In the proof we will estimate the linear term y k − x k, ∇f (y k) in terms of y k − x *, ∇f (y k) plus a correction which is controlled by the gap (the negative quadratic) in and the quadratic term in E.The second term in the Liapunov function gives, using 1-smoothness of the quadratic term in E. µ DISPLAYFORM9
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HJMINj05tQ
We show that Nesterov's method arises as a straightforward discretization of an ODE different from the one in Su-Boyd-Candes and prove acceleration in the stochastic case
We propose learning to transfer learn (L2TL) to improve transfer learning on a target dataset by judicious extraction of information from a source dataset. L2TL considers joint optimization of vastly-shared weights between models for source and target tasks, and employs adaptive weights for scaling of constituent losses. The adaptation of the weights is based on reinforcement learning, guided with a performance metric on the target validation set. We demonstrate state-of-the-art performance of L2TL given fixed models, consistently outperforming fine-tuning baselines on various datasets. In the regimes of small-scale target datasets and significant label mismatch between source and target datasets, L2TL outperforms previous work by an even larger margin.
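Only the abstract is reproduced above, so the following is a purely hypothetical sketch of the kind of scaled joint objective it describes: a shared model trained on a weighted combination of source and target losses, with the scaling weights left to an outer reinforcement-learning loop (not shown) that is rewarded by target validation performance. All names and signatures here are illustrative assumptions, not the paper's API.

```python
import torch

def l2tl_style_objective(shared_model, source_batch, target_batch, alpha, beta):
    """Weighted joint loss over shared weights.

    alpha: per-example weights for the source loss (would be set by the RL policy)
    beta:  scalar weight for the target loss (would also be adapted)
    """
    xs, ys = source_batch
    xt, yt = target_batch
    loss_source = torch.nn.functional.cross_entropy(shared_model(xs), ys, reduction="none")
    loss_target = torch.nn.functional.cross_entropy(shared_model(xt), yt)
    return (alpha * loss_source).mean() + beta * loss_target
```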
[ 1, 0, 0, 0, 0 ]
H1l0e6VKDS
We propose learning to transfer learn (L2TL) to improve transfer learning on a target dataset by judicious extraction of information from a source dataset.
In many partially observable scenarios, Reinforcement Learning (RL) agents must rely on long-term memory in order to learn an optimal policy. We demonstrate that using techniques from NLP and supervised learning fails at RL tasks due to stochasticity from the environment and from exploration. Utilizing our insights on the limitations of traditional memory methods in RL, we propose AMRL, a class of models that can learn better policies with greater sample efficiency and are resilient to noisy inputs. Specifically, our models use a standard memory module to summarize short-term context, and then aggregate all prior states from the standard model without respect to order. We show that this provides advantages both in terms of gradient decay and signal-to-noise ratio over time. Evaluating in Minecraft and maze environments that test long-term memory, we find that our model improves average return by 19% over a baseline that has the same number of parameters and by 9% over a stronger baseline that has far more parameters. We address the problem of reinforcement learning (RL) in tasks that require long-term memory. While many successes of Deep RL were achieved in settings that are (near) fully observable, such as Atari games, partial observability requires memory to recall prior observations that indicate the current state. Relying on full observability severely limits the applicability of such approaches. For example, many tasks in virtual and physical environments are naturally observed from a first-person perspective, which means that an agent may need to seek out and remember task-relevant information that is not immediately observable without directly observing the entire environment. Recent research has started to address this issue, but effective learning in RL settings with long sequential dependencies remains a key challenge in Deep RL. The currently most common approach to RL in partially observable settings relies on models that use memory components that were originally developed for tasks like those that occur in natural language processing (NLP), e.g., LSTMs and GRUs. Early work first demonstrated benefits of LSTMs in RL tasks designed to test memory, and these and similar approaches have become common in Deep RL, including multi-agent RL. In this work, we demonstrate that the characteristics of RL can severely impede learning in memory models that are not specifically designed for RL, and propose new models designed to tackle these challenges. For example, LSTMs excel in NLP tasks where the order of observations (characters or words) is crucial, and where influence between observations decays quickly with distance. Contrast this with a hypothetical RL example where an agent must discover a hidden passcode to escape a locked dungeon. The order of observations is highly dependent on the agent's path through the dungeon, yet when it reaches the door, only its ability to recall the passcode is relevant to escaping the dungeon, irrespective of when the agent observed it and how many observations it has seen since. Figure 1 illustrates the problem. Even in the simplified case where stochasticity is introduced by observation noise, the sample efficiency of LSTMs decreases drastically. We show that this problem occurs not just for LSTMs, but also for stacked LSTMs and DNCs, which have been widely applied in RL, and propose solutions that address this problem. Figure 1: Models trained on a noise-free (T-L, left) and a noisy (T-LN, right) TMaze task.
LSTM completely fails in the noisy setting while AMRL-Max learns rapidly. (68% confidence interval over 5 runs, as for all plots.) We make the following three contributions. First, in Section 3, we introduce our approach, AMRL. AMRL augments memory models like LSTMs with aggregators that are substantially more robust to noise than previous approaches. Our models combine several innovations which jointly allow the model to ignore noise while maintaining order-variant information as needed. Further, AMRL models maintain informative gradients over very long horizons, which is crucial for sample-efficient learning in long-term memory tasks. Second, in Section 5, we systematically evaluate how the sources of noise that affect RL agents affect the sample efficiency of AMRL and baseline approaches. We devise a series of experiments in two domains, a symbolic maze domain and 3D mazes in the game Minecraft. Our results show that AMRL can solve long-term memory tasks significantly faster than existing methods. Across tasks, our best model achieves an increase in final average return of 9% over baselines with far more parameters and 19% over LSTMs with the same number of parameters. Third, in Section 6 we analytically and empirically analyze the characteristics of our proposed and baseline models with the aim to identify factors that affect performance. We empirically confirm that AMRL models are substantially less susceptible to vanishing gradients than previous models. We propose to additionally analyze memory models in terms of the signal-to-noise ratio achieved at increasing distances from a given signal, and show that AMRL models can maintain signals over many timesteps. Jointly, the results of our detailed analysis validate our modeling choices and show why AMRL models are able to effectively solve long-term memory tasks. External Memory in RL. In the RL setting, the work on external memory models is perhaps most relevant to our own. One approach introduces a memory network to store a fixed number of prior memories after encoding them in a latent space and validates this approach in Minecraft; however, the models do not focus on long term memory and are limited to fixed-length memories (i.e., the past 30 frames). The Neural Map is similar to our work in that it provides a method in which past events are not significantly harder to learn than recent events. However, it is special-cased specifically for agents on a 2D grid, which is more restrictive than our scope of assumptions. Finally, the Neural Turing Machine (NTM) and its successor the Differentiable Neural Computer (DNC) have been applied in RL settings. They use an LSTM controller and attention mechanisms to explicitly write chosen memories into external memory. Unlike the DNC, which is designed for algorithmic tasks, intentionally stores the order of writes, and induces sparsity in memory to avoid collisions, we write memories into order-invariant aggregation functions that provide benefits in noisy environments. We select the DNC, the most recent and competitive prior approach, for baseline comparisons. Other Memory in RL. A second and orthogonal approach to memory in Deep RL is to learn a separate policy network to act as a memory unit and decide which observations to keep. These approaches are generally trained via policy gradient instead of back-propagation through time (BPTT). These approaches are often difficult to train and are orthogonal to our work, which uses BPTT. The Low-Pass RNN uses a running average similar to our models.
However, they only propagate gradients through short BPTT truncation window lengths. In fact, they show that LSTMs outperform their method when the window size is the whole episode. Since we are propagating gradients through the whole episode, we use LSTMs as a baseline instead. Several works propose the use of self-supervised auxiliary losses, often relating to prediction of state transitions, to force historical data to be recorded in memory. Along this line, model-based RL has also made use of memory modules to learn useful transition dynamics instead of learning a policy gradient or value function. These are orthogonal and could be used in conjunction with our approach, which focuses on model architecture. Finally, several previous works focus on how to deal with storing initial states for truncated trajectories in a replay buffer. Memory in Supervised Learning. In the supervised setting, there has also been significant work in memory beyond LSTMs and GRUs. Similar to our work, several approaches use a running average over inputs of some variety. Some use an RNN in conjunction with running averages; others use the average to provide context to the RNN (we do the inverse); all use an exponential decay instead of a non-decaying average. Additionally, there have been myriad approaches attempting to extend the range of RNNs that are orthogonal to our work, given that any could be used in conjunction with our method as a drop-in replacement for the LSTM component. Other approaches for de-noising and attention have been proposed (e.g., Wöllmer et al., 2013) but have runtime requirements that would be prohibitive in RL settings with long horizons. Here, we limit ourselves to methods with O(1) runtime per step. We consider a learning agent situated in a partially observable environment denoted as a Partially Observable Markov Decision Process (POMDP). We specify this process as a tuple (S, A, R, P, O, Ω, γ). At time step t, the agent inhabits some state, s_t ∈ S, not observable by the agent, and receives some observation as a function of the state, o_t ∈ Ω ∼ O(o_t|s_t): Ω × S → R_{≥0}. O is known as the observation function and is one source of stochasticity. The agent takes some action a_t ∈ A. The POMDP then transitions to state s_{t+1} ∼ P(s_{t+1}|s_t, a_t): S × A × S → R_{≥0}, and the agent receives reward r_t = R(s_t, a_t): S × A → R and receives a next observation o_{t+1} ∼ O upon entering s_{t+1}. The transition function P also introduces stochasticity. The sequence of prior observations forms an observation trajectory τ_t ∈ Ω^t ≡ T. To maximize the discounted return ∑_{t=0}^{∞} γ^t r_t, the agent chooses each discrete a_t from a stochastic, learned policy conditioned on trajectories, π(a_t|τ_t): T × A → [0, 1]. Given that P, O, and π itself are stochastic, τ_t can be highly stochastic, which we show can prevent learning. In this section we introduce our model AMRL, and detail our design choices. At a high level, our model uses an existing memory module to summarize context, and extends it to achieve desirable properties: robustness to noise and informative gradients. LSTM base model. We start from a standard memory model as shown in Figure 2a. We use the base model to produce a contextual encoding of the observation o_t that depends on the sequence of prior observations.
Here, we use several feed-forward layers, FF_1 (defined in A.5), followed by an LSTM (defined in A.6) layer: e_t = FF_1(o_t) // The encoded observation; h_t = LSTM_t(e_t) // The output of the LSTM. Previous work proposed a stacked approach that combines two (or more) LSTM layers (Figure 2b) with the goal of learning higher-level abstractions. A key limitation of both LSTM and LSTM STACK approaches is susceptibility to noise. [Table 1 columns: Aggregator | Definition | Jacobian | ST Jacobian | SNR.] Noise can be introduced in several ways, as laid out in Section 3.1. First, observation noise introduces variance in the input o_t. Second, as motivated by our introductory example of an agent exploring a dungeon, variance on the level of the trajectory τ_t is introduced by the transition function and the agent's behavior policy. Recall that in our dungeon example, the agent encounters many irrelevant observations between finding the crucial passcode and arriving at the door where the passcode allows escape. This variance in τ_t generally produces variance in the output of the function conditioning on τ_t. Thus, although the first part of our model makes use of an LSTM to encode previous inputs, we expect the output, h_t, to be sensitive to noise. Aggregators. To address the issue of noise highlighted above, we introduce components designed to decrease noise by allowing it to cancel. We call these components aggregators, labeled M in Figures 2c and 2d. An aggregator is a commutative function that combines all previous encodings h_t in a time-independent manner. Aggregators are computed dynamically from their previous value: m_t = g(m_{t−1}, h_t[1:|h|/2]), where h_t[1:|h|/2] denotes the first half of h_t, and g denotes the aggregator function, the choices of which we detail below. All proposed aggregators can be computed in constant time, which results in an overall memory model that matches the computational complexity of LSTMs. This is crucial for RL tasks with long horizons. In this work we consider the SUM, AVG, and MAX aggregators defined in Table 1. All three are easy to implement in standard deep learning frameworks. They also have desirable properties in terms of gradient flow (Jacobian in Table 1) and signal-to-noise ratio (SNR), which we detail next. Aggregator signal-to-noise ratio (SNR). Our primary design goal is to design aggregators that are robust to noise from the POMDP, i.e., variation due to observation noise, behavior policy or environment dynamics. For example, consider the outputs from previous timesteps, h_t, to be i.i.d. vectors. If we use the average of all h_t as our aggregator, then the variance will decrease linearly with t. To formally and empirically assess the behavior of memory models in noisy settings, we propose the use of the signal-to-noise ratio or SNR. The SNR is a standard tool for assessing how well a system maintains a given signal, expressed as the ratio between the signal (the information stored regarding the relevant information) and noise. In this section, to maintain flow, we simply state the SNR that we have analytically derived for the proposed aggregators in Table 1. We note that the SNR decays only linearly in time t for SUM and AVG aggregators, which is empirically slower than baselines, and has a bound independent of time for the MAX aggregator. We come back to this topic in Section 6, where we describe the derivation in detail, and provide further empirical results that allow comparison of all proposed and baseline methods.
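The following is a minimal sketch of the three aggregators named above written as streaming updates, for readability rather than as the authors' implementation. Keeping a running sum and dividing by the time-step for AVG is one way to realize the constant-time update; tensor shapes and names are illustrative.

```python
import torch

def update_aggregator(m_prev, h_half, kind, t):
    """One constant-time update of an order-invariant aggregator.

    m_prev: running aggregator state (for AVG this is the running sum)
    h_half: the half of the LSTM output h_t fed to the aggregator
    t:      current time-step (1-based), needed only by AVG
    Returns (new_state, output_to_concatenate).
    """
    if kind == "SUM":
        s = m_prev + h_half
        return s, s
    if kind == "AVG":
        s = m_prev + h_half            # internal state: running sum of h_1..h_t
        return s, s / float(t)         # output: average, so i.i.d. noise can cancel
    if kind == "MAX":
        m = torch.maximum(m_prev, h_half)
        return m, m
    raise ValueError(f"unknown aggregator: {kind}")
```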
Aggregator gradients. In addition to making our model robust to noise, our proposed aggregators can be used to tackle vanishing gradients. For example, the sum aggregator can be viewed as a residual skip connection across time. We find that several aggregators have this property: given that the aggregator does not depend on the order of inputs, the gradient does not decay into the past for a fixed-length trajectory. We can show that for a given input x_i and a given output o, the gradient do/dx_i (or expected gradient) of our proposed aggregators does not decay as i moves away from t, for a given t. We manually derived the Jacobian column of Table 1 to show that the gradient does not depend on the index i of a given past input. We see that the gradient decays only linearly in t, the current time-step, for the AVG and MAX aggregators. Given that the gradients do not vanish when used in conjunction with h_t as input, they provide an immediate path back to each h_t through which the gradient can flow. SET model. Using an aggregator to aggregate all previous o_t yields a novel memory model that we term SET (Figure 2c). This model has good properties in terms of SNR and gradient signal as shown above. However, it lacks the ability to maintain order-variant context. We address this limitation next, and include the SET model as an ablation baseline in our experiments. Combining LSTM and aggregator. In our AMRL models, we combine our proposed aggregators with an LSTM model that maintains order-dependent memories, with the goal of obtaining a model that learns order-dependent and order-independent information in a manner that is data efficient and robust to noise. In order to achieve this, we reserve certain neurons from h_t, as indicated by '/' in Figure 2d. We only apply our aggregators to one half of h_t. Given that our aggregators are commutative, they lose all information indicating the context for the current time-step. To remedy this, we concatenate the other half of h_t onto the aggregated memory. The final action is then produced by several feed-forward layers, FF_2 (defined in A.5), yielding the action output by the network. Straight-through connections. We showed above that our proposed aggregators provide advantages in terms of maintaining gradients. Here, we introduce a further modification that we term straight-through (ST) connections, designed to further improve gradient flow. Given that our proposed aggregators are non-parametric and fairly simple functions, we find that we can deliberately modify their Jacobian as follows. We pass the gradients straight through the model without any decay. Thus, in our ST models, we modify the Jacobian of g and set it to be equal to the identity matrix (as is typical for non-differentiable functions). This prevents gradients from decaying at all, as seen in the ST Jacobian column of Table 1. Our proposed models combine the components introduced above: AMRL-Avg combines the AVG aggregator with an LSTM, and uses a straight-through connection. Similarly, AMRL-Max uses the MAX aggregator instead. Below we also report an ablation, SET, which uses the AVG aggregator without an LSTM or straight-through connection. In Section 5 we will see that all components of our model contribute to dramatically improved robustness to noise compared to previous models. We further validate our design choices by analysing gradients and SNR of all models in Section 6. We examine the characteristics of all proposed and baseline models through a series of carefully constructed experiments.
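Before turning to the experiments, here is an illustrative sketch of one AMRL step as described above: the LSTM output is split in half, one half is aggregated order-invariantly, the straight-through trick keeps the backward pass an identity through the aggregator, and the concatenation feeds the final feed-forward layers. It reuses the `update_aggregator` helper sketched earlier; layer sizes and names are assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class AMRLHead(nn.Module):
    """Sketch of the AMRL combination step: aggregate half of h_t, keep the
    other half as order-dependent context, then apply FF_2."""
    def __init__(self, hidden=256, num_actions=4, kind="MAX"):
        super().__init__()
        self.kind = kind
        self.ff2 = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, num_actions))

    def forward(self, h_t, m_prev, t):
        half = h_t.shape[-1] // 2
        h_agg, h_ctx = h_t[..., :half], h_t[..., half:]
        m_t, out = update_aggregator(m_prev, h_agg, self.kind, t)
        # Straight-through: forward value is the aggregator output, but the
        # gradient flows back to h_agg as if g were the identity.
        out_st = h_agg + (out - h_agg).detach()
        logits = self.ff2(torch.cat([out_st, h_ctx], dim=-1))
        return logits, m_t
```

In practice the aggregator state m_prev would be initialized per episode, e.g. zeros for SUM/AVG and a very negative tensor for MAX.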
In all experiments, an RL agent (we use PPO, see Appendix A.5) interacts with a maze-like environment. Similar mazes have been proposed previously in order to evaluate long term memory in RL agents, and visual Minecraft mazes have been used to evaluate fixed-length memory. We compare agents that use one of our proposed AMRL models, or one of the baseline models, as a drop-in replacement for the combined policy and value network. (A single network computes both by modifying the size of the output.) Baselines are detailed in Section 4.3. (Footnote 1: f_t is applied to the vector m_t. Although the sum and max aggregators can be computed using the functions g(x, y) = sum(x, y) and g(x, y) = max(x, y) respectively, the average must divide by the time-step; thus we set f_t to divide by the time-step for the average and to be the identity otherwise.) In this set of tasks, an agent is placed in a T-shaped maze, at the beginning of a long corridor, as shown in Figure 3(a) (here green indicates both the start state and the indicator). The agent must navigate to the end of the corridor (purple) and faces a binary decision task. It must step left or right according to the start indicator it observed, which requires memory to retain. In all TMaze tasks, the agent receives the observation as a vector, indicating whether it is at the start of the corridor, whether it is at the end, and the color of the indicator if present. Our experiments use the following variants of this task (see Appendix A.1 for additional detail): TMaze Long (T-L) Our base task reduces to a single decision task: the agent is deterministically stepped forward until it reaches the end of the corridor where it must make a decision based on the initial indicator. Corridor and episode length is 100. Reward is 4 for the correct action, and -3 otherwise. This task eliminates exploration and other noise as a confounding factor and allows us to establish base performance for all algorithms. TMaze Long Noise (T-LN) To test robustness to noise, observations are augmented by a random variable n ∈ {−1, 1}, sampled uniformly at random. The variable n is appended to the observation, which is vector-valued. Other details remain as in T-L. TMaze Long-Short (T-LS) Our hardest TMaze task evaluates whether additional short-term tasks interfere with a memory model trying to solve a long-term memory task. We add an intermediate task: we append n ∈ {−1, 1}, sampled uniformly at random, to the input and only allow the agent to progress forward if its discrete action a ∈ {−1, 1} matches. Corridor length is still 100. A maximum episode length of 150 is imposed given the introduction of exploration.
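For concreteness, a toy generator for the T-L / T-LN observation sequences described above might look like the following. The exact observation encoding and reward wiring used in the paper are deferred to its appendix and are not reproduced here; this sketch only mirrors the textual description (start flag, end flag, indicator shown at the start, optional noise bit).

```python
import numpy as np

def tmaze_episode(length=100, noisy=False, rng=None):
    """Generate observations for a T-L (noisy=False) or T-LN (noisy=True) episode.

    Each observation is [at_start, at_end, indicator, (optional noise bit)].
    Returns the observation sequence and the indicator the agent must recall.
    """
    rng = rng or np.random.default_rng()
    indicator = float(rng.choice([-1.0, 1.0]))        # which way to turn at the T-junction
    obs = []
    for t in range(length):
        o = [1.0 if t == 0 else 0.0,                  # at the start of the corridor
             1.0 if t == length - 1 else 0.0,         # at the end / decision point
             indicator if t == 0 else 0.0]            # indicator visible only at the start
        if noisy:
            o.append(float(rng.choice([-1.0, 1.0])))  # irrelevant noise feature (T-LN)
        obs.append(o)
    return np.array(obs, dtype=np.float32), indicator
```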
There are 16 rooms total, each requiring at least 6 steps to solve. The episode timeout is 200 steps. MC Long-Short-Ordered (MC-LSO) This task tests whether models can learn policies conditioned on distant order-dependencies over two indicators. The two indicators can each be green or red. Only a green followed by red indicates that the goal is to the right at the end. There are 10 rooms with a timeout of 200 steps. MC Long-Short-Noise (MC-LSN) This task starts from MC-LS and adds observation noise to test robustness to noise while learning a short and long-term task. For each visual observation we add Gaussian noise to each (RGB) channel. An example observation is shown in Figure 3(e). There are 10 rooms with a timeout of 200 steps. We compare the following approaches: AMRL-Max. Our method with the MAX aggregator (Fig. 2d). AMRL-Avg. Our method with the AVG aggregator (Fig. 2d). SET. Ablation: AMRL-Avg without LSTM or straight-through connection (Fig. 2c). LSTM. The currently most common memory model (Fig. 2a). LSTM STACK. Stacks two LSTM cells for temporal abstraction (Fig. 2b). DNC. A highly competitive existing baseline with a more complex architecture. Our main results are provided in Figures 4 and 5. We start by analyzing the T-L results. In this base task without noise or irrelevant features, we expect all methods to perform well. Indeed, we observe that all approaches are able to solve this task within 50k environment interactions. Surprisingly, the LSTM and stacked LSTM learn significantly slower than alternative methods. We hypothesize that gradient information may be stronger for other methods, and expand on this in Section 6. We observe a dramatic deterioration of learning speed in the T-LN setting, which only differs from the previous task in the additional noise features added to the state observations. LSTM and DNC are most strongly affected by observation noise, followed by the stacked LSTM. In contrast, we confirm that our proposed models are robust to observation noise and maintain near-identical learning speed compared to the T-L task, thus validating our modeling choices. Finally, we turn to T-LS, which encapsulates a full RL task with observation features only relevant for the short-term task (i.e. long-term noise induced by short-term features), and noise due to exploration. Our proposed models, AMRL-Avg and AMRL-Max, are able to achieve returns near the optimal 13.9, while also learning fastest. All baseline models fail to learn the long-term memory task in this setting, achieving returns up to 10.4. The MC-LS task translates T-LS to the visual observation setting. The agent has to solve a series of short term tasks while retaining information about the initial indicator. As before, we see AMRL-Max and AMRL-Avg learn the most rapidly. The DNC model learns significantly more slowly but eventually reaches optimal performance. Our SET ablation does not learn the task, demonstrating that both the order-invariant and order-dependent components are crucial parts of our model. The MC-LSO task adds a strong order dependent task component. Our results show that the AMRL-Max and DNC models perform best here - far better than an LSTM or aggregator alone. We note that this is the only experiment where DNC performs better than AMRL-Max or AMRL-Avg. Here the optimal return is 10.2 and the optimal memory-less policy return is 8.45. We speculate that DNC is able to achieve this performance given the shorter time dependency relative to MC-LS and the lower observation noise relative to MC-LSN.
Finally, MC-LSN adds noise to the visual observations. As expected, the LSTM and LSTM STACK baselines completely fail in this setting. Again, AMRL-Max and AMRL-Avg learn fastest. In this task we see a large advantage for AMRL methods relative to methods with LSTMs alone, suggesting that AMRL has a particular advantage under observation noise. Moreover, we note the strong performance of AMRL-Max, despite the large state-space induced by noise, which affects the SNR bound. DNC is the baseline that learns best, catching up after 300k environment interactions. Given the strong empirical performance of our proposed method, here we analyze AMRL to understand its characteristics and validate model choices. Here we show that the proposed methods, which use different aggregators in conjunction with LSTMs, do not suffer from vanishing gradients, as discussed in Section 3. Our gradient estimates are formed as follows. We set the model input to 1 when t = 0 and to 0 for timesteps t > 0. We plot avg(d ⊙ d), where d = (1/T) dg_t/dx_i, over t. Samples are taken every ten steps, and we plot the average over three independent model initializations. Results over time, and for clarity, the final strength of the gradient, are summarized in Figure 6. We observe that the AMRL-Max and AMRL-Avg (and SUM) models have the same large gradient. Our models are followed by DNC, which in turn preserves gradient information better than LSTMs. The results obtained here help explain some of the empirical performance we observe in Section 4, especially in that LSTMs are outperformed by DNC, with our models having the greatest performance. However, the gradients are similar when noise is introduced (see Appendix A.4), indicating that this does not fully explain the drop in performance in noisy environments. Moreover, the gradients alone do not explain the superior performance of MAX relative to AVG and SUM. We suggest an additional analysis method in the next subsection. Following the discussion in Section 3, we now quantify SNR empirically to enable comparison across all proposed and baseline models. We follow the canonical definition of the SNR of a function over time, SNR(f_t) = E[s_t^2] / E[n_t^2] (Eq. 1), where t denotes time, f_t is a function of time, s_t is a constant signal observed at t, and n_t is a random variable representing noise at time t. Given this definition, we can derive the SNR analytically for our proposed aggregators. (See Appendix A.3 for details.) Analytical results are shown in the last column of Table 1. We see that the AVG and SUM aggregators have the same SNR, and that both decay only linearly. (Empirically, we will see LSTMs induce exponential decay.) Moreover, we see that MAX has a lower bound that is independent of t. Although the bound does depend on the size of the observation space, we observed superior performance even in large state spaces in the experiments (Section 5). We now turn to an empirical estimate of SNR. In addition to the analytic results presented so far, empirical estimates allow us to assess the SNR of our full AMRL models including the LSTM, and compare to baselines. Our empirical analysis compares model response under an idealized signal to that under idealized noise using the following procedure. The idealized signal input consists of a single 1 vector (the signal) followed by 0 vectors, and the noisy input sequence is constructed by sampling from {0, 1} uniformly at random after the initial 1. Using these idealized sequences we compute the SNR as per Eq. 1. We report the average SNR over each neuron in the output.
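A compact way to reproduce this empirical SNR probe is sketched below (the exact sequence counts used are stated just after this block). It assumes a generic `model` that maps an input sequence of shape (seq_len, input_dim) to an output sequence of the same length; treating the response to the clean impulse sequence as the signal and the deviation of the response under the noisy sequence as the noise is our reading of the procedure, not the authors' exact code.

```python
import numpy as np

def empirical_snr(model, seq_len=100, input_dim=3, n_seqs=20, seed=0):
    """Estimate SNR(t) = E[s_t^2] / E[n_t^2] (Eq. 1), averaged over output neurons.

    `model` is assumed to map an array of shape (seq_len, input_dim) to an
    output array of shape (seq_len, out_dim)."""
    rng = np.random.default_rng(seed)
    signal_in = np.zeros((seq_len, input_dim))
    signal_in[0] = 1.0                            # a single 1 vector, then 0 vectors
    s = model(signal_in)                          # response under the idealized signal
    noise_sq = np.zeros_like(s)
    for _ in range(n_seqs):
        noisy_in = rng.integers(0, 2, size=(seq_len, input_dim)).astype(float)
        noisy_in[0] = 1.0                         # same initial signal observation
        n = model(noisy_in) - s                   # deviation attributed to the noise
        noise_sq += n ** 2
    noise_sq /= n_seqs
    snr = s ** 2 / (noise_sq + 1e-12)             # per time-step, per output neuron
    return snr.mean(axis=-1)                      # averaged over neurons, shape (seq_len,)
```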
We estimate E[s_t^2] and E[n_t^2] over 20 input sequences, for each of 3 model initializations. The results show that AMRL-Max, Max, and the baseline DNC have the highest SNR. The lowest SNR is observed for LSTM and LSTM STACK. The decay for both LSTM models is approximately exponential, compared to the roughly linear decay observed for all other models. This empirical result matches our derivations in Table 1 for our proposed models. We observe that the SNR for LSTMs strongly depends on the time at which a given signal occurred, while our Max models and DNC are not as susceptible to this issue. The results in the previous section indicate that models that perform well on long-term memory tasks in noisy settings, such as those studied in Section 5, tend to have informative gradients and high SNR over long time horizons. In this section we further examine this relationship. Figure 8 shows the aggregate performance achieved by each model across the experiments presented in Section 5 and in Appendix A.2. We argue that these tasks capture key aspects of long-term memory tasks in noisy settings. We observe that our proposed AMRL-Avg and AMRL-Max approaches outperform all other methods. The ablations Max and Avg are competitive with baselines, but our results demonstrate the value of the ST connection. AMRL-Max improves over the LSTM average return by 19% with no additional parameters and outperforms the DNC average return by 9% with far fewer parameters. We have shown that AMRL models are not susceptible to the drastic performance decreases in noisy environments that LSTMs and DNCs are susceptible to, and we have shown that this generalizes to an ability to ignore irrelevant features in other tasks. Figure 8(b) relates overall model performance to the quantities analyzed above, SNR and gradient strength. We find SNR and gradient strength are both integral and complementary aspects needed for a successful model: DNC has a relatively large SNR, but does not match the empirical performance of AMRL - likely due to its decaying gradients. AMRL models achieve high SNR and maintain strong gradients, achieving the highest empirical performance. The reverse holds for LSTM models. An outlier is the SUM model - we hypothesize that the growing sum creates issues when interpreting memories independent of the time step at which they occur. The max aggregator may be less susceptible to growing activations given a bounded number of distinct observations, a bounded input activation, or an analogously compact internal representation. That is, the max value may be low and reached quickly. Moreover, the ST connection will still prevent gradient decay in such a case. Overall, our analytical and empirical analysis in terms of SNR and gradient decay both validates our modeling choices in developing AMRL, and provides a useful tool for understanding the learning performance of memory models. By considering both empirical measurements of SNR and gradients we are able to rank models closely in line with empirical performance. We consider this a particularly valuable insight for future research seeking to improve long-term memory. We have demonstrated that the performance of previous approaches to memory in RL can severely deteriorate under noise, including observation noise and noise introduced by an agent's policy and environment dynamics. We proposed AMRL, a novel approach designed specifically to be robust to RL settings, by maintaining strong signal and gradients over time.
Our empirical results confirmed that the proposed models outperform existing approaches, often dramatically. Finally, by analyzing gradient strength and signal-to-noise ratio of the considered models, we validated our model choices and showed that both aspects help explain the high empirical performance achieved by our models. In future research, we believe our models and analysis will form the basis for further understanding, and improving the performance of, memory models in RL. An aspect that goes beyond the scope of the present paper is the question of how to prevent long-term memory tasks from interfering with shorter-term tasks - an issue highlighted in Appendix A.2.3. Additionally, integration of AMRL into models other than the standard LSTM could be explored. Overall, our work highlights the need and potential for approaches that specifically tackle long-term memory tasks from an RL perspective. In this appendix we provide additional details of all tasks we report on in the main paper. In these experiments, the agent is initially placed at one end of a corridor. At the other end is the T-junction, where the agent can decide to move left or right. The goal state cannot be observed and is located in one of these two directions. Once the agent chooses to check a given direction, the episode terminates and the agent either receives a success or fail reward, determined by whether the agent picked the correct direction with the goal. The position of the goal is randomized between episodes and is indicated only in the very first step by an observation called the indicator. T-L is our base task where noise is minimized. The agent is automatically moved to the next position in each step to eliminate variation due to an agent's exploratory policy. Thus, there is a single decision to learn - whether to move left or right at the T-junction. In all TMaze experiments, to avoid confounding factors such as changing the length of BPTT and changing the total number of timesteps per indicator observation, we fix the length of the maze and simply move the indicator. The indicator can be placed at the beginning of the maze or at another location in the middle. The agent receives a reward of 4 for the correct action at the junction and -3 for an incorrect action at the junction. We encode observations as vectors of length 3, with the first dimension taking on the value of 1 if the agent is at the start (0 otherwise), the second dimension taking on the value of 1 or -1 corresponding to the two values of the indicator (when the agent is at the indicator, 0 otherwise), and the final dimension taking on the value of 1 if the agent is at the T-junction (0 otherwise). (For example, one such vector encodes an observation for the agent at the start with the goal placed to the right.) Unless otherwise stated, we use a timeout of 150 steps. Here we append a noise feature to the agent observation to test robustness to observation noise. Noise is sampled uniformly from {-1, 1}. This experiment is a variant of previously proposed experiments in which continuous valued noise was used. Here we choose discrete noise features as they allow us to build up to the short-long decision task discussed next. The short-term task that we add is to "recreate" the noise observation. More precisely, we append a dimension to the action space that allows for two actions: one representing the value 1 and the other representing -1. If this action matches the noise observation, then the agent proceeds to the next step and receives a reward of 0.1.
Otherwise, the agent stays where it is and the observation is recreated with the noise dimension sampled from {-1, 1}. The agent starts on an elevated platform, facing a block corresponding to a certain indicator. When on the platform, the agent must step to the right to fall off the platform and into the maze. The agent is now positioned at the southern entrance to a room oriented on the north-south axis. Stepping forward, the agent enters the room. At this point, there are columns to the agent's left and right preventing the agent from moving east or west. The agent has all of the actions (north, east, west) available, but will remain in place if stepping into a column. The agent's best choice is to move forward onto a ledge. On this ledge, the agent faces a column whose block type (diamond or iron) indicates a safe direction (east or west) to fall down. If the agent chooses correctly, it gets a positive reward of 0.1. At this point, the agent must proceed north two steps (which are elevated), fall back to the center, then north to enter the next room. At the very end (purple), the agent must go right if the initial indicators were green (green then red in the multi-step case), and left otherwise. The agent receives a reward of 4 for a correct action at the end and -3 otherwise. In addition, if the agent takes an action that progresses it to the next step, it receives a reward of 0.1 for that correct action. Unless otherwise stated, we use a timeout of 200. Here 13.7 is optimal, and 10.2 is the best possible for a memory-less policy. There are 16 rooms total, each requiring at least 6 steps each to solve. In this version of the Minecraft Maze, there is a second initial platform that the agent will land on after the first and must also step right off of to enter the maze. The options for the colors of the indicators are (green, red), (red, green), (green, green), (red, red). Of these, only the first indicates that the agent is to take a right a the end of the maze. As in the T-Maze Long-Short Ordered environment, the goal here is to see if aggregating the LSTM outputs is capable of providing an advantage over an aggregator or LSTM alone. We use only 10 rooms given the time required to solve this environment. We speculate that this environment is the only one where DNC outperforms our ST models due to the shorter length, which gives our models less of an advantage. In this version of the Minecraft Maze we start from the MC-LS task and add 0-mean Gaussian noise to each channel in our RGB observation, clipping the range of the pixels after the noise to the original range of range [-1,1]. The noise has a standard deviation of 0.05. In addition to adding noise that could affect the learning of our models, this experiment tests learning in a continuous observation space that could be problematic for the MAX aggregator. We use 10 rooms for this experiment with optimal return 10.1, and the optimal memory-less policy return is 6.6. In addition to our primary experiments presented in the main paper, we designed the following experiments to confirm that our proposed models retain the ability to learn order dependent information, using the path through the LSTM model. The expected is that learning speed and final outcome matches that of the baseline methods, and that the SET model cannot learn the orderdependent aspects of the tasks. This is indeed what we confirm. This experiment modifies the TMaze such that there are two indicators at opposite ends of the hallway. 
We place an indicator at position 1 and N-2 (the ends of the corridor that are just adjacent to the start and the T-junction respectively). The two indicators take on one of 4 pairs of values with equal probability; only the first of these corresponds to the goal being placed to the left at the end. In order to solve this environment optimally, the agent must remember both of the indicators in the correct order. We expect that a single h_t would need to be used to encode both, due to the order-variance, and that SET cannot perform this task. Given that the indicators on which the order depends span the length of the maze, we do not expect to see any performance differences between methods other than SET. Results are shown in Figure 9 (left) and confirm our hypothesis. SET is not able to learn the task while all other approaches correctly learn order-dependent information and solve the task optimally within at most 50k steps. In order to see whether we can learn policies conditioned on distant order-dependencies, along with irrelevant features, we extend the T-LS environment with an additional indicator, similar to that in T-LO. As above, the two indicators take on one of the pairs of values with equal probability, and only the first of these corresponds to the goal being placed to the left at the end. In this experiment, the two indicators were placed at positions 1 and 2, so that their observation does not include the start bit, which could be used to differentiate the two. Unlike in T-LO, our indicators appear at the start of the corridor and are adjacent. Here, we expect baseline methods to be less sample efficient than AMRL because gradients decay over long distances. Our results in Figure 9 confirm our hypothesis. We see only AMRL-Avg and AMRL-Max are able to exceed the return of the best memory-less policy (12.15). We confirm, by inspecting the individual learning curves, that both AMRL-Avg and AMRL-Max achieve reward near optimal. (Figure 9 caption: Results for the T-LO (left) and T-LSO (right) experiments. These are included in overall performance but not discussed in the main body. Our results confirm that AMRL models maintain order-dependent memories while the SET ablation does not. Note: "ST" is short for "AMRL".) This is also the only setting where the stacked LSTM performed worse than the LSTM. Our final environment is also constructed in Minecraft and is meant to more closely emulate the Strong Signal setting with a recurring signal, as defined in A.3 below. In this environment, the agent inhabits a 4 by 3 room, starting at the south end in the middle (position a on the map), and must collect as many chickens as possible from positions c1, c2, or c3 (see Figure 10, left and center), receiving a reward of 0.1 per chicken. The chicken inhabits one of three locations. The chicken starts forward one block and offset left or right by one block (c1 or c2), placed uniformly at random. The chicken is frozen in place and cannot move. Once collected, a new chicken spawns behind the agent in the middle of the room just in front of the agent's original spawn location (c3). Once that chicken is collected, the next chicken will again spawn forward and offset (c1 or c2), and the cycle continues. After 48 steps of the repeated signal, the floor changes from red or green to grey. The agent then has another 96 timesteps to collect chickens. At the end, the agent must recall the initial floor color to make a decision (turn left/right).
If the agent is correct, it receives a reward of 4 and keeps all the chickens, otherwise it receives a reward of -3 and falls into lava. The agent has 5 actions: forward, backward, left, right, collect. In the final step, when the agent is asked to recall the room color, left corresponds to red, right corresponds to green, and all other actions are incorrect (reward -3). We see that all models quickly plateau at a return near 4, although 8.8 is optimal. Roll-outs indicate that all models learned to remember room color, but struggled to collect chickens. Training the best performing model, MAX, for 1.5 million steps, we saw still perfect memory in addition to good chicken collection (e.g. roll-out: https://youtu.be/CLHml2Ws8Uw), with most mistakes coming from localization failure in an entirely grey room. Chicken collection can be learned to the same extent, but is learned slightly faster, without the memory-dependant long-term objective. We derive the SNR analytically for our proposed aggregators in two different settings. The setting assumed in the main body of the paper we term the weak signal setting. In this setting we assume that the signal s takes on some initial value s 0, followed by'0's. This simulates the setting where an initial impulse signal must be remembered. Additionally, we assume that each n t is 0-mean and is sampled from the same distribution, which we call ρ. The assumptions of the weak signal setting are motivated by our POMDP from the introduction in which an agent must remember the passcode to a door. In this setting, the order of states in between the passcode (signal) and the door are irrelevant (noise). Moreover, all of our aggregators are commutative. Thus, instead of the ordered observations in a trajectory, we can consider the set of states the agent encounters. If the episode forms an ergodic process, then as the episode continues, the distribution over encountered observations will approach the stationary distribution ρ, which defines a marginal distribution over observations. Thus, it is useful to consider the case where each state is drawn not from O(o t |s t), but rather i.i.d. from ρ, for the purpose of analysis. In addition to the weak signal setting, it is worth considering a recurring signal. We term such a setting the strong signal setting. The strong signal setting assumes that the signal recurs and that the signal and noise are not present at the same time. Specifically, there is one signal observation o s, which can occur at any time; that each observation o t is drawn from ρ; the signal s t is: o s if o t = o s, and 0 otherwise; and that the noise is 0 if o t = o s, and o t otherwise. This setting would be relevant, for example, for an agent engaging in random walks at the start of training. This agent may repeatedly pass by a signal observation that must be remembered. For the weak and strong signal settings, we now analytically derive the SNR of each of our proposed aggregators (summarised previously in Table 1). Average Aggregator Signal averaging is a common way to increase the signal of repeated measurements. In the Weak Signal case, the signal decays, but only linearly. Writing s as a shorthand for s 0: Assuming 0-mean noise: In the strong signal setting, we can actually see linear improvement. In this setting, given that the signal is also a random variable, we use: where t denotes time, f t is a function dependent on f t−1, s t is a constant signal given time, and n t is a random variable representing noise at time t. 
Given this definition we have: From above: Note that if we do not assume that the noise is zero-mean, what we wind up with is the following: This converges to: Sum Aggregator In the weak setting: SN R(sum(s t), sum(n t)) = s In the strong signal setting: SN R(sum(s t), sum(n t)) = ρ(s)(ts) n ∼ ρ) 2 ]) = SN R(avg(s t), avg(n t)) Thus in both settings, the SNR is the same as that of the average. For the max aggregator, we find SN R(max(s t), max(n t)) to be inadequate. The reason for this is that for s t as we have previously defined it max(s 0, ..., s t) = max(s 0, 0) will equal o s if o s > 0, else 0. However, there is no reason that the signal should vanish if the input is less than 0. Instead, we define m t to represent the signal left in our max aggregator. We define m t to be o s if ∀ 0<i≤t (o s ≥ o i) else 0. That is, if the signal "wins" the max so far, m t is o s = s, else 0, representing no signal. We define z t to be max 0<i≤t (o t) if ∃ 0<i≤t (o i > o s) else 0. In the continuous setting (weak setting with no repeats of any observation -signal or noise) we have: SN R(m t, z t) = which conveniently has no dependence on t and is very reasonable given a small observation space. In this section, we empirically measure the gradient under observation noise and empirically measure the SNR in the strong signal setting. We see that the gradient in the noisy setting is similar to the gradient in the non-noisy setting. We also see that when the signal is attenuated in the strong setting, the SNR of LSTM-based methods drops substantially. This is due to the SNR of the LSTMs being dependent on recency of the signal, and also in greater variance in the SNR. Our models, on the other hand, have a SNR with low variance and an SNR that is resilient to signal attenuation. Our models have a stronger dependence on the number of prior signals than on the recency of the signal. Learning Rate For learning rates, the best out of 5e-3, 5e-4, and 5e-5 are reported in each experiment, over 5 initialization each. Recurrent Architecture All LSTM sizes were size 256. DNC memory size is 16 (slots), word size 16 (16 floats), 4 read heads, 1 write head, and a 256-LSTM controller. Feed-forward Architecture We use ReLU activation. Our last feed-forward layer after the memory module is size 256 (F F 2). Before the memory module we have two feed forward layers both size 256 (F F 1). For models with image input, F F 1 consists of 2 convolutions layers: kernel with stride 2 and output channels 16; kernel with stride 2 and output-channels 32. Then this is flattened and a fully-connected layer of size 256 follows. Optimizer we use an adam optimizer , and a PPO agent . For training our PPO agent: mini-batch size 200, train batch size 4,000, num sgd iter 30, gamma.98. Software We used Ray version 0.6.2 . We used python 3.5.2 for all experiments not in the appendix, while python 3.5.5 was used for some earlier experiments in the appendix. Python 3.6.8 was used for some empirical analysis. The LSTM maintains its own hidden state from the previous time-step, h t−1, and outputs h t. We use the following definition of the LSTM : c t = tanh(W c,x x + W c,h h + b c) c t = c t−1 f t +ĉ t i t h t = tanh(c t) z t
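As a small numerical companion to the weak-signal derivations in Appendix A.3 above, the sketch below simulates the AVG and SUM aggregators on an impulse signal followed by i.i.d. zero-mean noise; both estimated SNR curves decay as s_0^2/t, matching the linear decay stated in Table 1. The noise distribution, horizon, and trial count are arbitrary illustrative choices.

```python
import numpy as np

def weak_signal_snr(T=200, trials=20000, s0=1.0, seed=0):
    """Monte-Carlo check of the weak-signal SNR of the AVG and SUM aggregators:
    impulse s0 at t = 0 followed by zeros, i.i.d. zero-mean unit-variance noise."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, 1.0, size=(trials, T))
    t = np.arange(1, T + 1)
    # AVG: signal component is s0/t, noise component is the running mean of noise.
    avg_noise = np.cumsum(noise, axis=1) / t
    snr_avg = (s0 / t) ** 2 / (avg_noise ** 2).mean(axis=0)
    # SUM: signal component stays s0, noise component is the running sum of noise.
    sum_noise = np.cumsum(noise, axis=1)
    snr_sum = s0 ** 2 / (sum_noise ** 2).mean(axis=0)
    return snr_avg, snr_sum              # both approximately s0**2 / t

snr_avg, snr_sum = weak_signal_snr()
print(snr_avg[[0, 9, 99]])               # roughly [1.0, 0.1, 0.01] for s0 = 1
```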
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Bkl7bREtDr
In Deep RL, order-invariant functions can be used in conjunction with standard memory modules to improve gradient decay and resilience to noise.
Optimization on manifold has been widely used in machine learning, to handle optimization problems with constraint. Most previous works focus on the case with a single manifold. However, in practice it is quite common that the optimization problem involves more than one constraints, (each constraint corresponding to one manifold). It is not clear in general how to optimize on multiple manifolds effectively and provably especially when the intersection of multiple manifolds is not a manifold or cannot be easily calculated. We propose a unified algorithm framework to handle the optimization on multiple manifolds. Specifically, we integrate information from multiple manifolds and move along an ensemble direction by viewing the information from each manifold as a drift and adding them together. We prove the convergence properties of the proposed algorithms. We also apply the algorithms into training neural network with batch normalization layers and achieve preferable empirical . Machine learning problem is often formulated as optimization problem. It is common that the optimization problem comes with multiple constraints due to practical scenarios or human prior knowledge that adding some of them help model achieve a better . One way to handle these constraints is adding regularization terms to the objective, such as the 1 and 2 regularization. However, it is hard to adjust the hyper-parameters of the regularization terms to guarantee that the original constraints get satisfied. Another way to deal with the constraints is to optimize on manifolds determined by the constraints. Then the optimization problem becomes unconstrained on the manifold, which could be easy to solve technically. Furthermore, optimization on manifold indicates optimizing on a more compact space, and may bring performance gain when training neural networks, e.g., BID10 BID3.Most previous works on manifold optimization focus on a single manifold BID13. However, in practice, we often face more than one constraints, each of them corresponding to one manifold. If we still solve the optimization problem with multiple constraints by method on manifold, we need to handle it on the intersection of multiple manifolds, which may no longer be a manifold BID11. Due to this, traditional optimization methods on manifold does not work in this case. In this paper, we consider the problem of optimization on multiple manifolds. Specifically, the problem is written as arg min DISPLAYFORM0 where each M i is a manifold. We propose a method solving this problem by choosing the moving direction as −∇f (x)(on manifold is −gradf (x)) with several drifts which are derived from the descent information on other manifolds. By this method, we get sequence that has information from all manifolds. There are several articles discussing the problem of optimization on manifold. Most of them focus on a single manifold. Readers can find a good summary about this topic and the advantages of op-timization on manifold in BID1. Recently, popular first order algorithms in Euclidean space are studied in the manifold setting, e.g., the convergence of gradient descent BID2, sub-gradient method BID5, stochastic variance reduction gradient (SVRG) and the gradient descent with momentum.Riemann approaches BID3 BID6 have also been applied to train deep neural network by noticing that the parameters of the neural network with batch normalization live on Grassmann manifold and Oblique manifold, respectively. 
This paper introduces an algorithm to deal with optimization with multiple manifolds. The algorithm adds drifts obtained from other manifolds to the moving direction, in order to incorporate the information from multiple manifolds during the optimization process. We prove the convergence of this algorithm under a very general framework. The proof is also applicable to the convergence of many other algorithms including gradient descent with momentum and gradient descent with regularization. Moreover, our proof does not depend on the choices of Retr x on the manifold. The specific definition of manifold M can be found in any topology book. For better understanding, we introduce several properties of manifold here. A manifold is a subspace of R n. For a given point x ∈ M, it has a tangent space T x M which is a linear space but M may not. For gradient descent method, the iterates are generated via DISPLAYFORM0 η is step length. However, the iterate generated by gradient descent may not on manifold anymore because manifold is not a linear space. To fix this, we introduce a retraction function Retr x (η): T x M → M to determine how point moves on manifold. Specifically, if M is R n, the Retr x becomes x + η. We can consider η in Retr x as the moving direction of the iterating point. Then, the gradient descent on manifold BID2 ) is given by DISPLAYFORM1 where gradf (x) is Riemannian gradient. Riemannian gradient is the orthogonal projection of gradient ∇f (x) to tangent space T x M as ∇f (x) may not in tangent space T x M and the moving direction on manifold is only decided by the vector in T x M. All of notations related to manifold can be referred to BID1 ).We next use a lemma to describe a property of the minimum point of the problem arg min x∈M f (x), which is a special case of , Corollary 4.2 and BID2, Proposition 1.Lemma 2.1 Let x be a local optimum for the optimization problem arg min x∈M f (x), which means there exists a neighborhood DISPLAYFORM2 We see that gradf (x) plays a role of ∇f (x) on manifold. Similar as BID2 discussed, we assume function has the property of Lipschtiz gradient. The definition of Lipschtiz gradient is Definition 2.1 (Lipschtiz gradient) For any two points x, y in the manifold M, f (x) satisfy: DISPLAYFORM3 Then we say that f satisfies the Lipschtiz gradient condition. We next introduce a condition that guarantees the convergence of iterative algorithms. Definition 2.2 (Descent condition) For a sequence {x k} and a k > 0, if DISPLAYFORM4 then we say the sequence satisfies the descent condition. First, we introduce a theorem to describe the convergence when the object function f is lower finite, i.e., there exists a f * such that f (x) ≥ f * > −∞ for all x, and the iterates satisfy descent condition. This theorem plays a key role in proof of the rest theorems. Theorem 2.1 If f is lower finite, and the iteration sequence {x k} satisfies the descent condition for any given {a k}, where each a k > 0. Then lim inf k→∞ a k gradf (x k) = 0 Proof 1 The proof is available in Supplemental. For better presentation, we first describe the algorithm under the circumstance of two manifolds. Considering the objective function f constrained on two manifolds M 1, M 2, we aim to find the minimum point on M 1 M 2. Since M 1 M 2 may not be a manifold, previous methods on manifold optimization cannot apply directly. We propose a method that integrates information from two manifolds over the optimization process. 
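Before the two-manifold updates are made precise below, the single-manifold iteration of equation 2 can be illustrated with a short sketch on the Oblique manifold St(n, 1) (unit vectors), using the orthogonal projection onto the tangent space as the Riemannian gradient and normalization as the retraction. The objective and step size in the example are arbitrary choices for illustration, not taken from the paper.

```python
import numpy as np

def manifold_gradient_descent(grad_f, x0, eta=0.1, iters=500):
    """Gradient descent on the unit sphere St(n, 1):
    x_{k+1} = Retr_{x_k}(-eta * grad f(x_k)), as in equation 2."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        g = grad_f(x)                              # Euclidean gradient of f at x
        rgrad = g - np.dot(x, g) * x               # project onto the tangent space T_x M
        step = -eta * rgrad
        x = (x + step) / np.linalg.norm(x + step)  # retraction back onto the manifold
    return x

# Example: minimize f(x) = x^T A x over unit vectors; the minimizer is an
# eigenvector for the smallest eigenvalue of A, and grad f(x) = 2 A x.
A = np.diag([3.0, 2.0, 1.0])
x_min = manifold_gradient_descent(lambda x: 2.0 * A @ x, np.array([1.0, 1.0, 1.0]))
```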
Specifically, we construct two sequences {x k}, {y k}, each on one manifold respectively. We add a drift which contains information from the other manifold to the original gradient descent on manifold (equation 2). The updating rules are DISPLAYFORM0 DISPLAYFORM1 If b k = 0 in (equation 3) and (equation 4), the updating rules reduce to normal gradient descent on manifold equation 2. The drift h k is in the tangent space T x M of each manifold, which represents information from the other manifold. We call this algorithm gradient descent on manifold with drifting, whose procedure is described in Algorithm 1.Algorithm 1 Gradient descent with drift on manifold DISPLAYFORM2 We next present the convergence theorem of this algorithm, which illustrates how we set a k and b k in the algorithm. Theorem 2.2 For function f (x) is lower finite, and Lipschtiz gradient. If we construct the sequence {x k} like equation FORMULA6, and for any 0 < δ < 2, we control δ ≤ a k ≤ 2. Setting DISPLAYFORM3 then x k convergence to a local minimizer. Proof 2 The proof is based on construction of the descent condition (equation 12) and is available in Supplemental. From the construction of b k, we can see that the smaller the correlation between gradf (x k) and h k is, the smaller effect the information from M 2 brings. In fact, we set h DISPLAYFORM4, where Px k is the projection matrix to tangent space DISPLAYFORM5 k which exchanges x k and Px k with y k and Py k (projection matrix of tangent space T y k M 2). The drift intuitively gives x k a force moving towards the minimizer on the other manifold. If the two manifolds are R n, then x k and y k are symmetry with each other. We have DISPLAYFORM6 If the equation system is stable and x 0, y 0 are mutually close, the distance between x k and y k will be small when k → ∞. By Schwarz inequality, we see b k ≤ 2(1 − a k). Since h k = gradf (x k), the scale of the drift is the same as the original Riemannian gradient. Hence, information from another manifold will not affect much, when the points x k and y k are close to a minimizer. We can control the contribution of the information from the other manifold by adjusting a k. For instance, a k = 1 indicates we do not integrate information from the other manifold. We can also prove the convergence rate of this algorithm. DISPLAYFORM7 Proof 3 The proof is delegated to Supplemental. Theorem 2.3 states the number of iterations we need to achieve a specific accuracy. Here we can adjust a k as long as δ < a k < 2. In this subsection, we describe our algorithm for the case with multiple (more than 2) manifolds. Suppose we have n manifolds, M 1, · · ·, M n, and sequence on manifold M i is denoted as {x DISPLAYFORM0 In the following, we use sequence {x k } on M 1 as an example, and other sequences on other manifolds can be derived accordingly. Let g DISPLAYFORM1 Then let the updating rule be DISPLAYFORM2 Since f satisfies Lipschtiz gradient condition(2.1), we have DISPLAYFORM3 We choose a DISPLAYFORM4 The way of choosing a DISPLAYFORM5 k ) ij, i, j from 2 to n, and α k = (a BID7 . It transforms the input value to a neuron from z = w T x to DISPLAYFORM6 DISPLAYFORM7 We can calculate the derivative as follows DISPLAYFORM8 For any a = 0, a ∈ R, we see that BN (w) = BN (aw) and DISPLAYFORM9. These equations mean that after a batch normalization, the scale of parameter has no relationship with the output value, but scale of gradient is opposite with the scale of parameter. 
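The scale-invariance property just stated is easy to check numerically. The toy sketch below batch-normalizes the pre-activation of a single linear unit and verifies that scaling the weight by a positive factor leaves the output unchanged while scaling the gradient by the inverse factor; the data, the scalar loss, and the factor a are arbitrary.

```python
import torch

def bn_output_and_grad(w, X, eps=1e-8):
    """Batch-normalized pre-activation for a single linear unit z = X w:
    subtract the batch mean and divide by the batch standard deviation."""
    w = w.clone().requires_grad_(True)
    z = X @ w
    z_hat = (z - z.mean()) / (z.std(unbiased=False) + eps)
    z_hat.sum().backward()               # any scalar function of BN(w) would do
    return z_hat.detach(), w.grad

torch.manual_seed(0)
X, w, a = torch.randn(32, 5), torch.randn(5), 7.3    # a > 0: positive rescaling
out_w, grad_w = bn_output_and_grad(w, X)
out_aw, grad_aw = bn_output_and_grad(a * w, X)
print(torch.allclose(out_w, out_aw, atol=1e-5))       # BN(w) == BN(a w)
print(torch.allclose(grad_w, a * grad_aw, atol=1e-5)) # gradient at a*w is (1/a) * gradient at w
```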
BID3 have discussed that batch normalization could have an adverse effect in terms of optimization since there can be an infinite number of networks, with the same forward path but different scaling, which may converge to different local optima owing to different gradients. To avoid this phenomenon, we can eliminate the effect of scale by considering the weight w on the Grassmann manifold or Oblique manifold. On these two manifolds, we can ignore the scale of parameter. BID3; BID6 respectively discuss that BN (w) has same image space on G(1, n) and St(n, 1) as well as R n, where G(1, n) is a Grassmann manifold and St(n, 1) is an Oblique manifold. Due to these, we can consider applying optimization on manifold to batch normalization problem. However, the property of these two manifold implies that we can actually optimize on the intersection of two manifolds. Since optimization on a manifold rely on Riemannian gradient gradf (x) and Retr x, for a specific Retr x of Grassmann manifold G(1, n), we get a unit point x when η = −gradf (x) = 0 in formula. The condition gradf (x) = 0 means we obtain a unit critical point on Grassmann manifold which is also on Oblique manifold. The specific discussion of Grassmann manifold and Oblique manifold can be found in BID1. G(1, n) is a quotient manifold defined on a vector space, it regards vector with same direction as same element. For example and correspond to same element. We represent elements on G(1, n) with same direction by choosing one of them as representation element. Oblique manifold is given by St(n, p) = {X ∈ R n×p : ddiag(X T X) = I p }, where ddiag(·) is diagonal matrix of a matrix. We have discussed above that iteration point on G(1, n) would be a unit point when it's a local minimizer. Due to this, the local minimizer we find is actually live on the intersection of St(n, 1) and G(1, n). Hence, training neural network with batch normalized weights can be converted to the problem arg min DISPLAYFORM10 Let Riemannian gradient be projection of ∇f (x) to tangent space of x. On G(1, n), we have DISPLAYFORM11 gradf (x) = P DISPLAYFORM12 DISPLAYFORM0 On St(n, 1), we have P DISPLAYFORM1 x (η) = x + η x + η the P x is the projection matrix onto the tangent space at x. These can be derived from the general formulas from BID0 and BID4.In backward process of training neural network, weight parameter of each layer is a matrix. Hence, we get gradient to a matrix in every layer. To make calculation easier, we treat the gradient matrix and parameters matrix as vector. For example a m × n gradient matrix can be viewed as a m × n dimensional vector. Then we apply Algorithm 1 to update parameters, which means we optimize on a product manifold DISPLAYFORM2 k i is number of parameters for the i-th hidden layer, and n is number of hidden layers. We need to operate algorithm for parameter vector on each hidden layer. In other words, we update parameters layer by layer. In this section, we use data set CIFAR-10 and CIFAR-100 BID8 ) to test our algorithm. These two data sets are color images respectively have 10 and 100 classes, each of them has 50,000 training images and 10,000 test images. The deep neural network we used is WideResNet BID15, it output a vector which describe the probability of a data divided into each class. In every hidden layer of neural network, we apply batch normalization to weight parameters and treat them as a vector. 
We have already discussed that minimizers of a neural network with batch normalized weights live on the intersection of the Grassmann manifold and the Oblique manifold. Hence, we can train a neural network with batch normalized weights using our algorithm. The biases of every hidden layer are unrelated to batch normalization and are updated by SGD. For every training step, we calculate the mean loss (1/S) Σ_{x_i ∈ S} l(f(x_i, θ), y_i) of a mini batch as a substitute for the real loss function E_x[l(f(x, θ), y)], where S is the batch size. The process of the algorithm on two manifolds follows Algorithm 1, where the two manifolds are G(1, n) and St(n, 1), respectively. In Algorithm 1, we choose DISPLAYFORM0 In the updating rules of x_k and y_k, we add a norm-clip to the update vectors a_k gradf(x_k) + b_k h_k (and the analogous vector for y_k). Then we multiply the two vectors by η, where η is the learning rate. In the experiments, we compare three methods: 1) stochastic gradient descent on manifold with drifting (Drift-SGDM), 2) stochastic gradient descent on manifold BID2 (SGDM), and 3) stochastic gradient descent (SGD). From Algorithm 1, we obtain two sequences, each corresponding to a model on one manifold. We predict the output class by adding the two output vectors of the two models and choosing the largest entry as the predicted class. For Drift-SGDM (Algorithm 1), we set δ = 0.9 and an initial learning rate η_m = 0.4 for weight parameters, which is multiplied by 0.4 at epochs 60, 120, and 160. The initial learning rate η for biases is 0.01, which is multiplied by 0.4 at epochs 60, 120, and 160. The norm clip is 0.1. The training batch size is 128. The number of training epochs is 200. For SGDM, we choose a = 1 in Algorithm 1. The other settings are the same as for Drift-SGDM. Choosing a = 1 in Algorithm 1 means that SGDM optimizes on each manifold individually. We set SGD as the baseline. Its learning rate is 0.2, which is multiplied by 0.2 at epochs 60, 120, and 160. Weight decay is set to 0.0005, but we do not apply weight decay for the algorithms on manifolds. All other settings are the same as for the above two algorithms. For Drift-SGDM and SGDM, the loss is obtained from the average of the two models. The parameter scales of the two models can be different, because they live on the Grassmann manifold and the Oblique manifold respectively. Due to this, the comparison between Drift-SGDM and SGDM is the more meaningful one. We also give the accuracy curves and a table of accuracy rates on the test sets to validate our algorithms. We see that our algorithm performs better on larger neural networks. Our algorithm does not include a regularization term, and it does not perform as well in terms of generalization. We could add a regularization term as in BID3 to achieve better generalization. We choose δ in Algorithm 1 as 0.9. Since b_k^(i) ≤ 2(1 − a_k^(i)) for i = 1, 2, as we have discussed in Section 2, the drift term b_k^(i) in Algorithm 1 does not affect the iteration point much. We can actually set a smaller δ to enhance the influence of the drift term.
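To connect the hyper-parameters above (δ, the norm clip, and the learning rate η) back to Algorithm 1, the following is a rough sketch of one drifted update of the x-sequence on a unit-vector manifold. The exact formulas for the drift h_k and the weight b_k are not reproduced from the paper; projecting the companion sequence's gradient onto the current tangent space and setting b = 1 - a are illustrative stand-ins.

```python
import numpy as np

def proj(x, v):
    """Orthogonal projection of v onto the tangent space of the unit sphere at x."""
    return v - np.dot(x, v) * x

def retr(x, v):
    """Retraction for unit-vector representations of G(1, n) and St(n, 1)."""
    y = x + v
    return y / np.linalg.norm(y)

def drift_step(x, y, grad_f, eta=0.4, a=0.9, clip=0.1):
    """One drifted update of the x-sequence, in the spirit of Algorithm 1:
    x_{k+1} = Retr_x(-eta * clip(a * grad f(x) + b * h)).

    The drift h and weight b are illustrative assumptions, not the paper's exact
    choices: h is the Riemannian gradient evaluated at the companion point y and
    projected onto T_x M, and b = 1 - a, so that a close to 1 gives a small drift."""
    gx = proj(x, grad_f(x))                  # Riemannian gradient on the x-manifold
    h = proj(x, grad_f(y))                   # information from the other sequence
    direction = a * gx + (1.0 - a) * h
    norm = np.linalg.norm(direction)
    if norm > clip:                          # norm-clip as used in the experiments
        direction = direction * (clip / norm)
    return retr(x, -eta * direction)
```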
If this is established, the problem of optimization on intersection of multiple manifolds is solved. According to the updating rule (equation 3), we can derive many other algorithms, because the drift h k in (equation 3) is flexible. On the other hand, Retr x on our algorithm does not limit to a specific one. Since there are some for Retr x = Exp x, for example Corollary 8 in, we may get more elegant by using Exp x as retraction function in our algorithm. The manifolds we encounter in optimization are mainly embedded sub-manifold and quotient manifold BID1. Embedded sub-manifold is F −1 (y) for a smooth function F: M 1 → M 2, where M 1, M 2 are two manifolds and y ∈ M 2. Quotient manifold is a quotient topology space generalized by a specific equivalence relationship ∼. In this paper, we use Oblique manifold and Grassmann manifold which are embedded sub-manifold and quotient manifold respectively. The difficulty we faced in optimization on manifold is calculating tangent space T x M and Riemannian gradient gradf (x). Giving a exact formula of a tangent space T x M is not a easy problem. On the other hand, since Riemannian gradient is ∇f (x) projected to a tangent space T x M, finding projection matrix to a specific space T x M is nontrivial. In this section, we study the frame work of gradient descent with drift. In a special case, we regard R n as a manifold. Then, Rienmann gradient gradf (x) = ∇f (x), tangent space T x M = R n and Retr x (η) = x + η. In Algorithm, we set DISPLAYFORM0 Then we have DISPLAYFORM1 which is exactly a kind of gradient descent with momentum. And this algorithm is convergence as we proved. On the other hand, if choosing h k as gradient of a regularization term R(x) on x k. For example, h k becomes 2x k when R(x) = x 2. The iteration point in Algorithm FORMULA0 is achieved by gradient descent with regularization term. The drift in (equation 3) we have discussed is non-stochastic. But actually, we can change the drift as a stochastic term to construct a non-descent algorithm. Meanwhile, stochastic drift gives iteration sequence ability of jumping from local minimizer. The update rule is DISPLAYFORM0 where ξ k is a random vector with mean vector µ, covariance matrix Σ. The process of this algorithm is Algorithm 2.Algorithm 2 Non-descent method with stochastic noise Input 0 < δ < 2, x 0 ∈ M, Retr x, ε > 0 k → 1 while gradf (x) > ε do Sample ξ k with mean vector µ and covariance matrix DISPLAYFORM1 We give convergence theorem of Algorithm 2. The proof implies that this algorithm is non-descent, it also shows how we set a k and b k.Theorem A.1 For function f (x) ≥ f * > −∞, and Lipschtiz gradient. If we construct the sequence DISPLAYFORM2 where 0 < δ < 2, we have lim inf k→∞ gradf (x k) 2 = 0.In this theorem, b k control the speed of back fire. The noise ξ k in (equation 10) has small effect to iteration process when k is large, because sequence is about to be stable after enough iterations. But in beginning of iteration procedure, noise ξ k effects much which give iteration sequence ability of jumping from local minimizer. In this section, we give proof of theorems in this paper. The proof of Theorem 2.1 is Proof 4 (proof of Theorem 2.1) According to definition 2.2 of descent condition, we have DISPLAYFORM0 for any k. Since f is lower finite, we have DISPLAYFORM1 Proof 5 (proof of Theorem 2.2) Since f satisfy Lischtiz gradient(2.1), we have DISPLAYFORM0 By the definition of b k, we got DISPLAYFORM1 Before proof Theorem A.1, we need two lemmas. 
Lemma B.1 A random vector with Ex = µ and Covx = Σ. Then for any symmetric matrix A, we have E(x T Ax) = µ T Aµ + tr(AΣ).This lemma can be derived from BID12 Here we use Rayleigh theorem of theorem 2.4. BID12 Proof 8 (proof of Theorem A.1) Since P x k is a projection matrix, which is a symmetric idempotent matrix. Because f satisfies Lipschtiz gradient(2.1), we have DISPLAYFORM2 where ε i = (P xi gradf (x i)) T ξ i = gradf (x i)) T ξ i, η i = ξ T i P T xi P xi ξ i = ξ T i P xi ξ i. Due to the two random variables, algorithm is not necessary descent. Σ is a symmetric positive definite matrix. By Schwarz equality and definition of a i, we have DISPLAYFORM3 By Fatou's lemma, we have DISPLAYFORM4
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HJerDj05tQ
This paper introduces an algorithm to handle optimization problem with multiple constraints under vision of manifold.
It has long been assumed that high dimensional continuous control problems cannot be solved effectively by discretizing individual dimensions of the action space due to the exponentially large number of bins over which policies would have to be learned. In this paper, we draw inspiration from the recent success of sequence-to-sequence models for structured prediction problems to develop policies over discretized spaces. Central to this method is the realization that complex functions over high dimensional spaces can be modeled by neural networks that predict one dimension at a time. Specifically, we show how Q-values and policies over continuous spaces can be modeled using a next step prediction model over discretized dimensions. With this parameterization, it is possible to both leverage the compositional structure of action spaces during learning, as well as compute maxima over action spaces (approximately). On a simple example task we demonstrate empirically that our method can perform global search, which effectively gets around the local optimization issues that plague DDPG. We apply the technique to off-policy (Q-learning) methods and show that our method can achieve the state-of-the-art for off-policy methods on several continuous control tasks. Reinforcement learning has long been considered a general framework applicable to a broad range of problems. However, the approaches used to tackle discrete and continuous action spaces have been fundamentally different. In discrete domains, algorithms such as Q-learning leverage backups through Bellman equations and dynamic programming to solve problems effectively. These strategies have led to the use of deep neural networks to learn policies and value functions that can achieve superhuman accuracy in several games where actions lie in discrete domains. This success spurred the development of RL techniques that use deep neural networks for continuous control problems BID12 BID20. The gains in these domains, however, have not been as outsized as they have been for discrete action domains. This disparity is, in part, a result of the inherent difficulty in maximizing an arbitrary function on a continuous domain, even in low-dimensional settings. Furthermore, it becomes harder to apply dynamic programming methods to back up value function estimates from successor states to parent states in continuous control problems. Several of the recent continuous control reinforcement learning approaches attempt to borrow characteristics from discrete problems by proposing models that allow maximization and backups more easily BID12. One way in which continuous control can avail itself of the above advantages is to discretize each of the dimensions of continuous control action spaces. As has been noted previously, doing this naively, however, would create an exponentially large discrete space of actions. For example, with M dimensions each discretized into N bins, the problem would balloon to a discrete space with N^M possible actions. We leverage the recent success of sequence-to-sequence type models BID32 to train such discretized models, without falling into the trap of requiring an exponentially large number of actions. Our method relies on a technique that was first introduced in BID3, which allows us to escape the curse of dimensionality in high dimensional spaces by modeling complicated probability distributions using the chain rule decomposition.
In this paper, we similarly parameterize functions of interest -Q-values -using a decomposition of the joint function into a sequence of conditional values tied together with the bellman operator. With this formulation, we are able to achieve fine-grained discretization of individual domains, without an explosion in the number of parameters; at the same time we can model arbitrarily complex distributions while maintaining the ability to perform (approximate) global maximization. These benefits come at the cost of shifting the exponentially complex action space into an exponentially complex MDP BID5 BID10. In many settings, however, there are relationships between transitions that can be leveraged and large regions of good solutions, which means that this exponential space need not be fully explored. Existing work using neural networks to perform approximate exponential search is evidence of this BID37; BID2.While this strategy can be applied to most function approximation settings in RL, we focus on off-policy settings with an algorithm akin to DQN. Empirical on an illustrative multimodal problem demonstrates how our model is able to perform global maximization, avoiding the exploration problems faced by algorithms like NAF BID13 and DDPG. We also show the effectiveness of our method on a range of benchmark continuous control problems from hopper to humanoid. In this paper, we introduce the idea of building continuous control algorithms utilizing sequential, or autoregressive, models that predict over action spaces one dimension at a time. Here, we use discrete distributions over each dimension (achieved by discretizing each continuous dimension into bins) and apply it using off-policy learning. We briefly describe the notation we use in this paper. Let s t ∈ R L be the observed state of the agent, a ∈ R N be the N dimensional action space, and E be the stochastic environment in which the agent operates. Finally, let a i:j = a i · · · a j T be the vector obtained by taking the sub-range/slice of a DISPLAYFORM0 At each step t, the agent takes an action a t, receives a reward r t from the environment and transitions stochastically to a new state s t+1 according to (possibly unknown) dynamics p E (s t+1 |s t, a t). An episode consists of a sequence of such steps (s t, a t, r t, s t+1), with t = 1 · · · H where H is the last time step. An episode terminates when a stopping criterion F (s t+1) is true (for example when a game is lost, or when the number of steps is greater than some threshold length H max).Let R t = H i=t γ i−1 r i be the discounted reward received by the agent starting at step t of an episode. As with standard reinforcement learning, the goal of our agent is to learn a policy π (s t) that maximizes the expected future reward E [R H] it would receive from the environment by following this policy. Because this paper is focused on off-policy learning with Q-Learning BID39, we will provide a brief description of the algorithm. Q-learning is an off-policy algorithm that learns an action-value function Q (s, a) and a corresponding greedy-policy, π Q (s) = argmax a Q (s, a). The model is trained by finding the fixed point of the Bellman operator, i.e. DISPLAYFORM0 This is done by minimizing the Bellman Error, over the exploration distribution, ρ β (s) DISPLAYFORM1 Traditionally, Q is represented as a table of state action pairs or with linear function approximators or shallow neural networks BID39 BID35. 
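As a reminder of what the Bellman fixed point and the Bellman error in equation 2 compute, a tabular sketch of one Q-learning step is given below; the deep-network case replaces the table lookup with a function approximator and minimizes the squared error over a replay buffer. The learning rate and discount are placeholders.

```python
import numpy as np

def q_learning_step(Q, s, a, r, s_next, done, alpha=0.1, gamma=0.99):
    """One tabular Q-learning update: move Q[s, a] toward the bootstrapped
    target r + gamma * max_a' Q[s', a'] (the max is trivial for discrete actions)."""
    target = r if done else r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

def greedy_policy(Q, s):
    """The greedy policy pi_Q(s) = argmax_a Q(s, a)."""
    return int(np.argmax(Q[s]))
```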
Recently, there has been an effort to apply these techniques to more complex domains using non-linear function approximators that are deep neural networks (; . In these models, a Deep Q-Network (DQN) parameterized by parameters, θ, is used to predict Q-values, i.e. Q(s, a) = f (s, a; θ). The DQN parameters, θ, are trained by performing gradient descent on the error in equation 2, without taking a gradient through the Q-values of the successor states (although, see BID1 for an approach that takes this into account).Since the greedy policy, π Q (s), uses the action value with the maximum Q-value, it is essential that any parametric form of Q be able to find a maxima easily with respect to actions. For a DQN where the output layer predicts the Q-values for each of the discrete outputs, it is easy to find this max -it is simply the action corresponding to the index of the output with the highest estimated Q-value. In continuous action problems, it can be tricky to formulate a parametric form of the Q-value where it is easy to find such a maxima. Existing techniques either use a restrictive functional form, such as NAF BID13. DDPG employs a second neural network to approximate this max, in addition to the Q function approximator. This second network is trained to maximize / ascend the Q function as follows: DISPLAYFORM2 where ρ β is the state distribution explored by some behavioral policy, β and µ(·; θ µ) is the deterministic policy. In this work we modify the form of our Q-value function while still retaining the ability to find local maxima over actions for use in a greedy policy. In this section, we outline our proposed model, Sequential DQN (SDQN). This model decomposes the original MDP model with N -D actions to a similar MDP which contains sequences of 1-D actions. By doing this, we have 2 layer hierarchy of MDP -the "upper" containing the original environment, and the "lower" containing the transformed, stretched out, MDP. Both MDP model the same environment. We then combine these MDP by noting equality of Q values at certain states and doing bellman backups against this equality. See figure 1 for a pictorial view of this hierarchy. Consider an environment with states s t and actions a ∈ R N. We can perform a transformation to this environment into a similar environment replacing each N -D action into a sequence of N 1-D actions. This introduces a new MDP consisting of states u st k where superscript denotes alignment to the state s t, above, and subscript k to denote time offset on the lower MDP from s t. As a , u st k = (s t, a 1:k) is a tuple containing the state s t from original MDP and a history of additional states in the new MDP -in our case, a concatenation of actions a 1:k previously selected. The transitions of this new MDP can be defined by two rules: when all 1-D actions are taken we compute 1 step in the N -D environment receiving a new state, s t+1, a reward r t, and resetting a. In all other transitions, we append the previously selected action in a and receive 0 reward. This transformation reduces the N -D actions to a series of 1-D actions. We can now discretize the 1-D output space and directly apply Q-learning. Note that we could apply this strategy to continuous values, without discretization, by choosing a conditional distribution, such as a mixture of 1-D Gaussians, over which a maxima can easily be found. As such, this approach is equally applicable to pure continuous domains as compared to discrete approximations. 
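A minimal sketch of this transformation is given below. It assumes a classic gym-style environment with a box action space; the bin layout, class name, and API details are placeholders for illustration rather than the authors' implementation:

import numpy as np

# Sketch of the "stretched" lower MDP: the agent picks one discretized action dimension per
# sub-step, the partial action vector is appended to the state, and the real environment only
# advances (and only pays reward) once all N dimensions have been chosen.
class SequentialActionWrapper:
    def __init__(self, env, n_bins=11):
        self.env, self.n_bins = env, n_bins
        self.n_dims = env.action_space.shape[0]
        low, high = env.action_space.low, env.action_space.high
        self.bins = [np.linspace(low[i], high[i], n_bins) for i in range(self.n_dims)]

    def reset(self):
        self.state = self.env.reset()
        self.partial = []                      # a_{1:k}: action dimensions chosen so far
        return (self.state, tuple(self.partial))

    def step(self, bin_index):
        dim = len(self.partial)
        self.partial.append(self.bins[dim][bin_index])
        if len(self.partial) < self.n_dims:    # intermediate sub-step: zero reward, no real step
            return (self.state, tuple(self.partial)), 0.0, False, {}
        state, reward, done, info = self.env.step(np.array(self.partial))
        self.state, self.partial = state, []   # all dims chosen: take one step in the real MDP
        return (self.state, tuple(self.partial)), reward, done, info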
The downside to this transformation is that it increases the number of steps needed to solve the transformed MDP. In practice, this transformation makes learning a Q-function considerably harder. The extra steps of dynamic programming coupled with learned function approximators causes large overestimation and stability issues. This can be avoided by learning Q-values for both MDPs at the same time and performing the bellman backup from the lower to the upper MDP for the transitions, s t, where Q-values should be equal. We define Q U (s, a) ∈ R, a ∈ R N, as a function that is the Q-value for the top MDP. Next, we define Q L (u, a i) ∈ R where a i ∈ R as the Q value for the lower MDP. We would like to have consistent Q-values across both MDPs when they perform one step in the environment. To make this possible, we must define how the time discounting works. We define the lower MDP to have zero discount for all steps except for when the real environment changes state. Thus, the discount is 0 for all all u st k where k < N, and the same as the top MDP when k = N. By doing this, the following is then true: DISPLAYFORM0 where u st N −1 contains the upper state and the previous N − 1 actions: (s t, a t 1:N −1). This equality allows us to "short circuit" the backups. Backups are only needed up until the point where the upper MDP can be used improving training and stability. During training, we parameterize Q U and Q L as neural networks. We learn Q U via TD-0 learning by minimizing: DISPLAYFORM1 Next, we learn Q L by also doing Q-learning, but we make use of of the equality noted in equation 3 and zero discounting. There is no new information nor environment dynamics for this MDP. As such we can draw samples from the same replay buffer used for learning Q U. For states u st k where k < N we minimize the bellman error as follows: DISPLAYFORM2 When Q U and Q L should be equal, as defined in equation 3, we do not backup we instead enforce soft equality by MSE. DISPLAYFORM3 In practice, as in DQN, we can also make use of target networks and/or double DQN BID15 when training Q U and Q L for increased stability. When using this model as a policy we compute the argmax over each action dimension of the lower MDP. As with DQN, we employ exploration when training with either epsilon greedy exploration or Boltzmann exploration. Q U is a MLP whose inputs are state and actions and outputs are Q values. Unlike in DDPG, the loss function does not need to be smooth with respect to actions. As such, we also feed in discretized representation of each action dimension to make training simpler. We worked with two parameterizations for Q L. First, we looked at a recurrent LSTM model BID17. This model has shared weights and passes information via hidden activations from one action dimension to another. The input at each time step is a function of the current state from the upper MDP, s t, and a single action dimension, a i. As it's an LSTM, the hidden state is capable of accumulating the previous actions. Second, we looked at a version with separate weights for each step of the lower MDP. The lower MDP does not have a fixed size input as the amount of action dimensions it has as inputs varies. To combat this, we use N separate models that are switched between depending on the state index of the lower MDP. We call these distinct models Q i where i ∈ [1, N]. These models are feed forward neural networks that take as input a concatenation of all previous action selections, a 1 t: i, as well as the upper state, s t. 
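The sketch below summarizes the three training terms described above for a single transition with N action dimensions: a TD-0 loss for Q U on the real one-step transition, zero-discount backups between sub-steps of the lower MDP, and the soft equality (MSE) tying Q L at the last sub-step to Q U. Network interfaces, target construction, and batching here are assumptions made for illustration, not the authors' code:

import torch
import torch.nn.functional as F

# q_upper(s, a) is assumed to return a scalar Q-value; q_lower is a list of N heads, where
# q_lower[k](s, prefix) returns one Q-value per bin given the (possibly empty) action prefix.
# target_upper(s_next) is an assumed target-network estimate of max_a' Q_U(s_next, a').
def sdqn_losses(q_upper, q_lower, s, a_bins, a_cont, r, s_next, gamma, target_upper):
    N = len(q_lower)
    # (i) upper MDP: standard one-step TD target
    with torch.no_grad():
        td_target = r + gamma * target_upper(s_next)
    loss_upper = F.mse_loss(q_upper(s, a_cont), td_target)

    # (ii) lower MDP: zero discount, so the target at sub-step k is the max Q at sub-step k+1
    loss_lower = 0.0
    for k in range(N - 1):
        with torch.no_grad():
            next_q = q_lower[k + 1](s, a_bins[: k + 1]).max()
        loss_lower = loss_lower + F.mse_loss(q_lower[k](s, a_bins[:k])[a_bins[k]], next_q)

    # (iii) where the two MDPs coincide: soft equality between Q_L and Q_U via MSE
    loss_tie = F.mse_loss(q_lower[N - 1](s, a_bins[:-1])[a_bins[-1]],
                          q_upper(s, a_cont).detach())
    return loss_upper, loss_lower, loss_tie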
Doing this in switching Q L with the respective Q i in every use. Empirically we found that this weight separation led to more stable training. In more complex domains, such as vision based control tasks for example, one should untie only a subset of the weights and keep common components -a vision system -the same. In practice, we found that for the simplistic domains we worked in fully untied weights was sufficient. Architecture exploration for these kinds of models is still ongoing work. For full detail of model architectures and training procedures selection see Appendix C.3 RELATED WORK Our work was motivated by two distinct desires -to learn policies over exponentially large discrete action spaces, and to approximate value functions over high dimensional continuous action spaces effectively. In our paper we used a sequential parameterization of policies that help us to achieve this without making an assumption about the actual functional form of the model. Other prior work attempts to handle high dimensional action spaces by assuming specific decompositions. For example, BID26 were able to scale up learning to extremely large action sizes by factoring the action value function and use product of experts to learn policies. An alternative strategy was proposed in using action embeddings and applying k-nearest neighbors to reduce scaling of action sizes. By laying out actions on a hypercube, BID23 are able to perform a binary search over actions ing in a logarithmic search for the optimal action. Their method is similar to SDQN, as both construct a Q-value from sub Q-values. Their approach presupposes these constraints, however, and optimizes the Bellman equation by optimizing hyperplanes independently thus enabling optimizing via linear programming. Our approach is iterative and refines the action selection, which contrasts to their independent sub-plane maximization. and BID22 proposes a transformation similar to ours where a continuous action MDP is converted to a sequence of transitions representing a binary search over the continuous actions. In our setting, we used a 2-layer hierarchy of variable width as opposed to a binary tree. Additionally, we used the original MDP as part of our training procedure to reduce estimation error. We found this to be critical to reduce overestimation error when working with function approximators. Along with the development of discrete space algorithms, researchers have innovated specialized solutions to learn over continuous state and action environments including BID30 BID13. More recently, novel deep RL approaches have been developed for continuous state and action problems. TRPO BID27 and A3C uses a stocastic policy parameterized by diagonal covariance Gaussian distributions. NAF BID13 relies on quadratic advantage function enabling closed form optimization of the optimal action. Other methods structure the network in a way such that they are convex in the actions while being non-convex with respect to states BID0 or use a linear policy BID24.In the context of reinforcement learning, sequential or autoregressive policies have previously been used to describe exponentially large action spaces such as the space of neural architectures, BID40 and over sequences of words BID2 BID29. These approaches rely on policy gradient methods whereas we explore off-policy methods. Hierarchical/options based methods, including BID9 ) which perform spatial abstraction or BID34 that perform temporal abstraction pose another way to factor action spaces. 
These methods refine their action selection from time where our approaches operates on the same timescale and factors the action space. A vast literature on constructing sequential models to solve tasks exists outside of RL. These models are a natural fit when the data is generated in a sequential process such as in language modeling This in more sample efficiency but yields poor approximations of the Q surface outside of where the policy is. Right: Reward achieved over time. DDPG quickly converges to a local maximum. SDQN has high variance performance initially as it searches the space, but then quickly converges to the global maximum as the Q surface estimate becomes more accurate. BID4. One of the first and most effective deep learned sequence-to-sequence models for language modeling was proposed in BID33, which used an encoder-decoder architecture. In other domains, techniques such as NADE BID19 To consider the effectiveness of our algorithm, we consider a deterministic environment with a single time step, and a 2D action space. This can be thought of as being a two-armed bandit problem with deterministic rewards, or as a search problem in 2D action space. We chose our reward function to be a multimodal distribution as shown in the first column in FIG2. A large suboptimal mode and a smaller optimal mode exist. As with bandit problems, this formulation helps us isolate the ability of our method to find an optimal policy, without the confounding effect that arises from backing up rewards via the Bellman operator for sequential problems. As in traditional RL, we do exploration while learning. We consider uniformly sampling (-greedy with = 1) as well as sampling data from a normal distribution centered at the current policy -we refer to this as "local." A visualization of the final Q surfaces as well as training curves can be found in FIG2. DDPG uses local optimization to learn a policy on a constantly changing estimate of Q values predicted by a critic. The form of the Q distribution is flexible and as such there is no closed form properties we can make use of for learning a policy. As such, gradient descent, a local optimization algorithm, is used. This algorithm can get stuck in a sub-optimal policy. We hypothesize that these local maximum in policy space exist in more realistic simulated environments as well. Traditionally, deep learning methods use local optimizers and avoid local minima or maxima by working in a high dimensional parameter space BID8. In RL, however, the action space of a policy is relatively small dimensional thus it is much more likely that they exist. For example, in the hopper environment, a common failure mode we experienced when training algorithms like DDPG is to learn to balance instead of moving forward and hopping. We contrast this to SDQN. As expected, this model is capable of completely representing the Q surface (under the limits of discretization). The optimization of the policy is not done locally however enabling convergence to the optimal policy. Much like DDPG, the Q surface learned can be done on uniform, off policy, data. Unlike DDPG, however, the policy will not get stuck in a local maximum. In the uniform behavior policy setting, the model slowly reaches the right solution.1 With a behavior policy that is closer to being on-policy (such as the stochastic Gaussian greedy policy referred to above), the rate of convergence increases. Much of the error occurs from selecting over estimated actions. 
When sampling more on policy, the over estimated data points get sampled more frequently ing in faster training. To evaluate the relative performance of these models we perform a series of experiments on common continuous control tasks. We test the hopper (3-D action space), swimmer (2-D action space), half cheetah (6-D action space), walker2d (6-D action space) and the humanoid environment (17-D action space) from the OpenAI gym suite BID6. 2 We performed a wide hyper parameter search over various parameters in our models (described in Appendix C), and selected the best performing runs. We then ran 10 random seeds of the same hyper parameters to evaluate consistency and to get a more realistic estimate of performance. We believe this replication is necessary as many of these algorithms are not only sensitive to both hyper parameters but random seeds. First, we look at learning curves of some of the environments tested in FIG4. Our method quickly achieves good policies much faster than DDPG. For a more qualitative analysis, we use the best reward achieved while training averaged across over 25,000 steps and with evaluations sampled every 5,000 steps. Again we perform an average over 10 different random seeds. This metric gives a much better sense of stability than the traditionally reported instantaneous max reward achieved during training. We compare our algorithm to the current state-of-the-art in off-policy continuous control: DDPG. Through careful model selection and extensive hyper parameter tuning, we train DDPG agents with performance better than previously published for some of these tasks. Despite this search, however, we believe that there is still space for significant performance gain for all the models given different neural network architectures and hyper parameters. See for discussion on implementation variability and performance. Results can be seen in Figure 4. Our algorithm achieves better performance on four of the five environments we tested. Unlike existing continuous control algorithms, we have a choice over the number of discritization bins we choose, B. To test the effect of this we first take the best performing half cheetah hyper parameter configuration found above, and rerun it varying the number of bins. For statistical significance we run 10 trials per tested value of B. Results can be found in Figure 5. These suggest that SDQN is robust to this hyper parameter, working well in all bin amounts greater than 4. Lower than 4 bins does not yield enough fine grain enough control to solve this task effectively. Next we look to the effect of action order. In most existing environments there is no implicit "ordering" to environments. Given the small action space dimensionality, we hypothesized that this ordering would not matter. We test this hypothesis by taking our hyper parameters for half cheetah found in section 4.2 and reran them with random action ordering, 10 times for statistical significance. Half cheetah has 6 action dimensions thus we are only exploring a subset of orderings. Seed 0 represents the ordering used when finding the original hyper parameters. Results can be found in Figure 5. While there is some variability, the overall changes in performance are small validating our original assumption. Conceptually, our approach centers on the idea that action selection at each stage can be factored and sequentially selected. In this work we use 1-D action spaces that are discretized as our base component. 
Existing work in the image modeling domain suggests that using a mixture of logistic units BID25 greatly speeds up training and would also satisfy our need for a closed form max. Additionally, this work imposes a prespecified ordering of actions which may negatively impact training for certain classes of problems (with much larger number of action dimensions). To address this, we could learn to factor the action space into the sequential order for continuous action spaces or learn to group action sets for discrete action spaces. Another promising direction is to combine this approximate max action with gradient based optimization procedure. This would relieve some of the complexity of the modeling task of the maxing network, at the cost of increased compute when sampling from the policy. Finally, the work presented here is exclusively on off-policy methods. We chose to focus on these methods due to their sample efficiency. Use of an sequential policies with discretized actions could also be used as the policy for any stochastic policy optimization algorithm such as TRPO BID27 or A3C . In this work we present a continuous control algorithm that utilize discretized action spaces and sequential models. The technique we propose is an off-policy RL algorithm that utilizes sequential prediction and discretization. We decompose our model into a hierarchy of Q function. The effectiveness of our method is demonstrated on illustrative and benchmark tasks, as well as on more complex continuous control tasks. Sampling an Action To gain insight into the characteristics of Q that our SDQN algorithm learns, we visualized from the hopper environment as it is complex but has a small dimensional action space. Figure B.3: Exploration of the sub-DQN during after training. The top row shows the Q i predictions for a given frame (action dimensions correspond to the joint starting at the top and moving toward the bottom -action 3 is the ankle joint). The bottom row shows the corresponding rendering of the current state. For insensitive parts of the gait, such as when the hopper is in the air (e.g. frame 430, 442, 490, 502), the network learns to be agnostic to the choice of actions; this is reflected in the flat Q-value distribution, viewed as a function of action index. On the other hand, for critical parts of the gait, such as when the hopper is in contact with the ground (e.g. frames 446, 478), the network learns that certain actions are much better than others, and the Q-distribution is no longer a flat line. This reflects the fact that taking wrong actions in these regimes could lead to bad such as tripping, yielding a lower reward. First we compute each action dimension's Q distribution, Q L / Q i, and compare those distributions to that of the top MDP for the full action dimentionality, Q U. A figure containing these distributions and corresponding state visualization can be found in Figure B.3.For most states in the hopper walk cycle, the Q distribution is very flat. This implies that small changes in the action taken in a state will have little impact on future reward. This makes sense as the system can recover from any action taken in this frame. However, this is not true for all statescertain critical states exist, such as when the hopper is pushing off, where not selecting the correct action value greatly degrades performance. This can be seen in frame 466.Our algorithm is trained with a number of soft constraints. 
First, if fully converged, we would expect Q i−1 >= Q i as every new sub-action taken should maintain or improve the expected future discounted reward. Additionally, we would expect Q N (s, a) = Q U (s, a) (from equation 6). In the majority of frames these properties seem correct, but there is certainly room for improvement. Next, we attempt to look at Q surfaces in a more global manner. We plot 2D cross sections for each pair of actions and assume the third dimension is zero. Figure B.4 shows the . As seen in the previous visualization, the surface of both the sequential Q surface and the Q U is not smooth, which is expected as the environment action space for Hopper is expected to be highly non-linear. Some regions of the surface seem quite noisy which is not expected. Interestingly though, these regions of noise do not seem to lower the performance of the final policy. In Q-learning, only the maximum Q value regions have any impact on the taken policy. Future work is needed to better characterize this effect. We would like to explore techniques that use "soft"; BID28; BID14. These techniques will use more of the Q surface thus smooth the representations. Additionally, we notice that the dimensions of the autoregressive model are modeled differently. The last action, a 3 has considerably more noise than the previous two action dimensions. This large difference in the smoothness and shape of the surfaces demonstrates that the order of the actions dimensions matters. This figure suggests that the model has a harder time learning sharp features in the a 1 dimension. In future work, we would like to explore learned orderings, or bidirectional models, to combat this. Finally, the form of Q U is extremely noisy and has many cube artifacts. The input of this function is both a one hot quantized action, as well as the floating point representation. It appears the model uses the quantization as its main feature and learns a sharp Q surface. The most sensitive hyper parameters were the learning rate of the two networks, reward scale, and finally, discount factor. Parameters such as batch size, quantization amounts, and network sizes mattered to a lesser extent. We found it best to have exploration be done on less than 10% of the transitions. We didn't see any particular value below this that gave better performance. In our experiments, the structure of model used also played a large impact in performance, such as, using tied versus untied weights for each sequential prediction model. In future work, we would like to lower the amount of hyper parameters needed in these algorithms and study the effects of various hyper parameters more thoroughly. In this model, we looked at a number of configurations. Of the greatest importance is the form of the model itself. We looked at an LSTM parameterization as well as an untied weight variant. The untied weight variant's parameter search is listed below. To compute Q i we first take the state and previous actions and do one fully connected layer of size "embedding size". We then concatenate these representations and feed them into a 2 hidden layer MLP with "hidden size" units. The output of this network is "quantization bins" wide with no activation. return a
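A sketch of one untied per-dimension head with the architecture described above (layer sizes are placeholders and the hidden activations are assumed to be ReLU) might look as follows:

import torch
import torch.nn as nn

# The state and the previously selected actions each pass through an embedding layer, the
# embeddings are concatenated, and a 2-hidden-layer MLP outputs one Q-value per quantization
# bin with no final activation.
class SubQNetwork(nn.Module):
    def __init__(self, state_dim, prev_action_dims, embed_size=128, hidden_size=256, n_bins=32):
        super().__init__()
        self.state_embed = nn.Linear(state_dim, embed_size)
        self.action_embed = (nn.Linear(prev_action_dims, embed_size)
                             if prev_action_dims > 0 else None)
        in_size = embed_size * (2 if prev_action_dims > 0 else 1)
        self.mlp = nn.Sequential(
            nn.Linear(in_size, hidden_size), nn.ReLU(),
            nn.Linear(hidden_size, hidden_size), nn.ReLU(),
            nn.Linear(hidden_size, n_bins),           # one Q-value per bin, no activation
        )

    def forward(self, state, prev_actions=None):
        parts = [torch.relu(self.state_embed(state))]
        if self.action_embed is not None:
            parts.append(torch.relu(self.action_embed(prev_actions)))
        return self.mlp(torch.cat(parts, dim=-1))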
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
r1SuFjkRW
A method to do Q-learning on continuous action spaces by predicting a sequence of discretized 1-D actions.
Model-based reinforcement learning (MBRL) aims to learn a dynamic model to reduce the number of interactions with real-world environments. However, due to estimation error, rollouts in the learned model, especially those of long horizon, fail to match the ones in real-world environments. This mismatching has seriously impacted the sample complexity of MBRL. The phenomenon can be attributed to the fact that previous works employ supervised learning to learn the one-step transition models, which has inherent difficulty ensuring the matching of distributions from multi-step rollouts. Based on the claim, we propose to learn the synthesized model by matching the distributions of multi-step rollouts sampled from the synthesized model and the real ones via WGAN. We theoretically show that matching the two can minimize the difference of cumulative rewards between the real transition and the learned one. Our experiments also show that the proposed model imitation method outperforms the state-of-the-art in terms of sample complexity and average return. Reinforcement learning (RL) has become of great interest because plenty of real-world problems can be modeled as a sequential decision-making problem. Model-free reinforcement learning (MFRL) is favored by its capability of learning complex tasks when interactions with environments are cheap. However, in the majority of real-world problems, such as autonomous driving, interactions are extremely costly, thus MFRL becomes infeasible. One critique about MFRL is that it does not fully exploit past queries over the environment, and this motivates us to consider the model-based reinforcement learning (MBRL). In addition to learning an agent policy, MBRL also uses the queries to learn the dynamics of the environment that our agent is interacting with. If the learned dynamic is accurate enough, the agent can acquire the desired skill by simply interacting with the simulated environment, so that the number of samples to collect in the real world can be greatly reduced. As a , MBRL has become one of the possible solutions to reduce the number of samples required to learn an optimal policy. Most previous works of MBRL adopt supervised learning with 2 -based errors (; ; or maximum likelihood , to obtain an environment model that synthesizes real transitions. These non-trivial developments imply that optimizing a policy on a synthesized environment is a challenging task. Because the estimation error of the model accumulates as the trajectory grows, it is hard to train a policy on a long synthesized trajectory. On the other hand, training on short trajectories makes the policy short-sighted. This issue is known as the planning horizon dilemma . As a , despite having a strong intuition at first sight, MBRL has to be designed meticulously. Intuitively, we would like to learn a transition model in a way that it can reproduce the trajectories that have been generated in the real world. Since the attained trajectories are sampled according to a certain policy, directly employing supervised learning may not necessarily lead to the mentioned especially when the policy is stochastic. The resemblance in trajectories matters because we estimate policy gradient by generating rollouts; however, the one-step model learning adopted by many MBRL methods do not guarantee this. Some previous works propose multi-step training (; ;); however, experiments show that model learning fails to benefit much from the multi-step loss. 
We attribute this outcome to the essence of supervised learning, which elementally preserves only one-step transitions, so the similarity between real trajectories and the synthesized ones cannot be guaranteed. Figure 1 (panels: supervised learning vs. rollout from the real world): Distribution matching enables the learned transition to generate similar rollouts to the real ones even when the policy is stochastic or the initial states are close. On the other hand, training with supervised learning does not ensure rollout similarity and the resulting policy gradient may be inaccurate. This figure considers a fixed policy sampling in the real world and a transition model. In this work, we propose to learn the transition model via distribution matching. Specifically, we use WGAN to match the distributions of the state-action-next-state triple (s, a, s') in the real and learned models so that the agent policy can generate similar trajectories when interacting with either the true transition or the learned transition. Figure 1 illustrates the difference between methods based on supervised learning and distribution matching. Different from the ensemble methods proposed in previous works, our method is capable of generalizing to unseen transitions with only one dynamic model, because merely incorporating multiple models does not alter the essence that one-step (or few-step) supervised learning fails to imitate the distribution of multi-step rollouts. Concretely, we gather some transitions in the real world according to a policy. To learn the real transition T, we then sample fake transitions from our synthesized model with the same policy. The synthesized model serves as the generator in the WGAN framework, and there is a critic that discriminates between the two kinds of transition data. We update the generator and the critic alternately until the synthesized data cannot be distinguished from the real data, which, as we show later, theoretically implies that the synthesized transition converges to the true transition. Our contributions are summarized below: • We propose an MBRL method called model imitation (MI), which enforces the learned transition model to generate similar rollouts to the real one so that the policy gradient is accurate; • We theoretically show that the transition can be learned by MI, in the sense that the synthesized transition converges to the true transition by consistency, and that the difference in cumulative rewards can be bounded by the training error of WGAN; • To stabilize model learning, we deduce a guarantee for our sampling technique and investigate training across WGANs; • We experimentally show that MI is more sample-efficient than state-of-the-art MBRL and MFRL methods and outperforms them on four standard tasks. In this section, we introduce our motivation, inspired by learning from demonstration (LfD), and give a brief survey of MBRL methods. A straightforward approach to LfD is to leverage behavior cloning (BC), which reduces LfD to a supervised learning problem. Even though learning a policy via BC is time-efficient, it cannot imitate a policy without sufficient demonstration because the error may accumulate without the guidance of an expert. Generative Adversarial Imitation Learning (GAIL) is another state-of-the-art LfD method that learns an optimal policy by utilizing generative adversarial training to match occupancy measures. GAIL learns an optimal policy by matching the distribution of the trajectories generated from an agent policy with the distribution of the given demonstration. Prior work shows that the two distributions match if and only if the agent has learned the optimal policy.
One of the advantages of GAIL is that it only requires a small amount of demonstration data to obtain an optimal policy but it requires a considerable number of interactions with environments for the generative adversarial training to converge. Our intuition is that transition learning (TL) is similar to learning from demonstration (LfD) by exchanging the roles of transition and policy. In LfD, trajectories sampled from a fixed transition are given, and the goal is to learn a policy. On the other hand, in TL, trajectories sampled from a fixed policy are given, and we would like to imitate the underlying transition. That being said, from LfD to TL, we interchange the roles of the policy and the transition. It is therefore tempting to study the counterpart of GAIL in TL; i.e., learning the transition by distribution matching. Fortunately, by doing so, the pros of GAIL remain while the cons are insubstantial in MBRL because sampling with the learned model is considered to be much cheaper than sampling in the real one. That GAIL learns a better policy than what BC does suggests that distribution matching possesses the potential to learn a better transition than supervised learning. For deterministic transition, @ -based error is usually utilized to learn the transition model. , an approach that uses supervised learning with mean-squared error as its objective, is shown to perform well under fine-tuning. To alleviate model bias, some previous works adopt ensembles , where multiple transition models with different initialization are trained at the same time. In a slightly more complicated manner, utilizes meta-learning to gather information from multiple models. Lastly, on the theoretical side, SLBO is the first algorithm that develops from solid theoretical properties for model-based deep RL via a joint model-policy optimization framework. For the stochastic transition, maximum likelihood estimator or moment matching are natural ways to learn a synthesized transition, which is usually modeled by the Gaussian distribution. Following this idea, Gaussian process and Gaussian process with model predictive control are introduced as an uncertainty-aware version of MBRL. Similar to the deterministic case, to mitigate model bias and foster stability, an ensemble method for probabilistic networks is also studied. An important distinction between training a deterministic or stochastic transition is that although the stochastic transition can model the noise hidden within the real world, the stochastic model may also induce instability if the true transition is deterministic. This is a potential reason why an ensemble of models is adopted to reduce variance. We consider the standard Markov Decision Process (MDP) . MDP is represented by a tuple S, A, T, r, γ, where S is the state space, A is the action space, T (s t+1 |s t, a t) is the transition density of state s t+1 at time step t + 1 given action a t made under state s t, r(s, a) is the reward function, and γ ∈ is the discount factor. A stochastic policy π(a|s) is a density of action a given state s. Let the initial state distribution be α. The performance of the triple (α, π, T) is evaluated in the expectation of the cumulative reward in the γ-discounted infinite horizon setting: Equivalently, R(α, π, T) is the expected cumulative rewards in a length-H trajectory {s t, a t} H−1 t=0 generated by (α, π, T) with H ∼ Geometric(1 − γ). 
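The equivalence stated in the last sentence can be checked numerically on a toy chain MDP (made up here purely for illustration): the γ-discounted return equals the expected undiscounted return of a trajectory whose length H is drawn from Geometric(1 − γ):

import numpy as np

rng = np.random.default_rng(0)
gamma, n = 0.9, 5
P = rng.dirichlet(np.ones(n), size=n)        # policy folded into a state-to-state transition
r = rng.uniform(size=n)                      # reward for visiting each state

v_exact = np.linalg.solve(np.eye(n) - gamma * P, r)[0]   # sum_t gamma^t r_t from state 0

returns = []
for _ in range(50_000):
    H = rng.geometric(1 - gamma)             # E[H] = 1 / (1 - gamma)
    s, total = 0, 0.0
    for _ in range(H):
        total += r[s]
        s = rng.choice(n, p=P[s])
    returns.append(total)
print(v_exact, np.mean(returns))             # the two estimates agree up to Monte Carlo noise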
When α and T are fixed, R(·) becomes a function that only depends on π, and reinforcement learning algorithms aim to find a policy π to maximize R(π). Given initial state distribution α(s), policy π(a|s) and transition T (s |s, a), the normalized occupancy measure ρ where P(·) is the probability measure and will be replaced by a density function if S or A is continuous. Intuitively, ρ α,π T (s, a) is a distribution of (s, a) in a length-H trajectory {s t, a t} H−1 t=0 with H ∼ Geometric(1 − γ) following the laws of (α, π, T). , the relation between ρ α,π T and (α, π, T) is characterized by the Bellman flow constraint. Specifically, x = ρ α,π T as defined in Eq. 2 is the unique solution to: In addition, Theorem 2 of gives that π(a|s) and ρ α,π T (s, a) have an one-to-one correspondence with α(s) and T (s |s, a) fixed; i.e., π(a|s) is the only policy whose occupancy measure is ρ. With the occupancy measure, the cumulative reward Eq. 1 can be represented as The goal of maximizing the cumulative reward can then be achieved by adjusting ρ α,π T, and this motivates us to adopt distribution matching approaches like WGAN to learn a transition model. In this section, we present a consistency and error bounds for WGAN. All proofs of the following theorems and lemmas can be found in Appendix A. In the setting of MBRL, the training objective for WGAN is By Kantorovich-Rubinstein duality , the optimal value of the inner maximization is exactly is the discounted distribution of (s, a, s). Thus, by minimizing over the choice of T, we are essentially finding p that minimizes W 1 (p(s, a, s)||p (s, a, s)), which gives the consistency . Proposition 1 (Consistency for WGAN). Let T and T be the true and synthesized transitions respectively. If WGAN is trained to its optimal point, we have The support constraint is inevitable because the training data is sampled from ρ T and guaranteeing anything beyond it can be difficult. Still, we will empirically show that the support constraint is not an issue in our experiments because the performance boosts up in the beginning, indicating that Supp(ρ α,π T) may be large enough initially. Now that training with WGAN gives a consistent estimate of the true transition, it is sensible to train a synthesized transition upon it. However, the consistency is too restrictive as it only discusses the optimal case. The next step is to analyze the non-optimal situation and observe how the cumulative reward deviates w.r.t. the training error. Theorem 1 indicates that if WGAN is trained properly, i.e., having small, the cumulative reward on the synthesized trajectory will be close to that on the true trajectory. As MBRL aims to train a policy on the synthesized trajectory, the accuracy of the cumulative reward over the synthesized trajectory is thus the bottleneck. Theorem 1 also implies that WGAN's error is linear to the (expected) length of the trajectory (1 − γ) −1. This is a sharp contrast to the error bounds in most RL literature, as the dependency on the trajectory length is usually quadratic , or of an even higher order. Since WGAN gives us a better estimation of the cumulative reward in the learned model, the policy update becomes more accurate. In this section, we present a practical MBRL method called model imitation (MI) that incorporates the transition learning mentioned in Section 4. Due to the long-term digression, it is hard to train the WGAN directly from a long synthesized trajectory. 
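Before describing how this issue is handled, the WGAN objective in equation 5 can be made concrete with a short sketch. Here critic(s, a, s_next) and model.sample(s, a) are assumed interfaces; note that in MI the synthesized transition is not updated by back-propagating through the critic but by PPO on pseudo-rewards (described next), so the generator loss below only illustrates the objective itself:

import torch

# Critic: maximize E_real[f] - E_fake[f], i.e. minimize the negation.
def critic_loss(critic, model, s, a, s_next_real):
    s_next_fake = model.sample(s, a).detach()
    return -(critic(s, a, s_next_real).mean() - critic(s, a, s_next_fake).mean())

# Generator (the synthesized transition): push its triples toward higher critic scores.
def model_loss(critic, model, s, a):
    s_next_fake = model.sample(s, a)          # assumed reparameterized Gaussian sample
    return -critic(s, a, s_next_fake).mean()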
To tackle this issue, we use the synthesized transition T to sample N short trajectories with initial states sampled from the true trajectory. To analyze this sampling technique, let β < γ be the discount factor of the short trajectories so that the expected length is E is upper bounded by the training error of WGAN on short trajectories, which can be small empirically because the short ones are easier to imitate. and Lemma 1, where d is the dimension of (s, a). , where diam(·) is the diameter. The second term encourages β to be large while the third term does the opposite. Besides, β need not be large if N is large enough; in practice we may sample N short trajectories to reduce the error from W 1 (ρ T ||ρ T) to W 1 (ρ β T ||ρ T). Finally, since ρ β T is the occupancy measure we train on, from the proof of Theorem 1 we deduce that. Thus, WGAN may perform better under this sampling technique. To learn the real transition based on the occupancy measure matching mentioned in Section 4, we employ a transition learning scheme by aligning the distribution of (s, a, s) between the real and the learned environments. Inspired by how GAIL learns to align (s, a) via solving an MDP with rewards extracted from a discriminator, we formulate an MDP with rewards from a discriminator over (s, a, s). Specifically, the WGAN critic f (s, a, s) in Eq. 5 is used as the (psuedo) rewards r(s, a, s) of our MDP. Interestingly, there is a duality between GAIL and our transition learning: for GAIL, the transition is fixed and the objective is to train a policy to maximize the cumulative pseudo rewards, while for our transition learning, the policy is fixed and the objective is to train a synthesized transition to maximize the cumulative pseudo rewards. In practice, since the policy is updated alternatively with the synthesized model, we are required to train a number of WGANs along with the change of the policy. Although the generators across WGANs correspond to the same transition and can be similar, we observe that WGAN may get stuck at a local optimum when we switch from one WGAN training to another. The reason is that unlike GAN that mimics the Jensen-Shannon divergence and hence its inner maximization is upper bounded by log, WGAN mimics the Wasserstein distance and the inner maximization is unbounded from above. Intuitively, such unboundedness makes the WGAN critic so strong that the WGAN generator (the synthesized transition) cannot find a way out and get stuck at a local optimum. Thereby, we have to modify the WGAN objective to alleviate such a situation. To ensure the boundedness, for a fixed δ > 0, we introduce cut-offs at the WGAN objective so that the inner maximization is upper bounded by 2δ: As δ → ∞, Eq. 6 recovers the WGAN objective, Eq. 5. Therefore, this is a truncated version of WGAN. To comprehend Eq. 6 further, notice that it is equivalent to which is a hinge loss version of the generative adversarial objective. Such WGAN is introduced in , where the consistency is provided and further experiments are evaluated in. According to , the inner minimization can be interpreted as the soft-margin SVM. Consequently, it provides a geometric intuition of maximizing margin, which potentially enhances robustness. Finally, because the objective of transition learning is to maximize the cumulative pseudo rewards on the MDP, T does not directly optimize Eq. 7. Note that the truncation only takes part in the inner minimization: which gives us a WGAN critic f (s, a, s). 
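A common hinge-loss form of this truncated critic objective, consistent with the 2δ bound discussed above, is sketched below (interfaces as assumed in the previous sketch; this is an illustration rather than the authors' implementation):

import torch

# Scores on real triples are only rewarded up to +delta and scores on synthesized triples only
# penalized down to -delta, so the inner maximization stays bounded and training across
# successive WGANs is less likely to stall. The trained critic f then serves as the pseudo
# reward for transition learning.
def hinge_critic_loss(critic, model, s, a, s_next_real, delta=1.0):
    s_next_fake = model.sample(s, a).detach()
    real_term = torch.clamp(delta - critic(s, a, s_next_real), min=0).mean()
    fake_term = torch.clamp(delta + critic(s, a, s_next_fake), min=0).mean()
    return real_term + fake_term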
As mentioned, f will be the pseudo reward function. Later, we will introduce a transition learning version of PPO to optimize the cumulative pseudo reward. for N epochs do for n policy epochs do 10: update π θ by TRPO on the data generated by T φ 11: end for 12: end for 13: end for After modifying the WGAN objective, to include both the stochastic and (approximately) deterministic scenarios, the synthesized transition is modeled by a Gaussian distribution T (s |s, a) = T φ (s |s, a) ∼ N (µ φ (s, a), Σ φ (s, a) ). Although the underlying transitions of tasks like MuJoCo are deterministic, modeling by a Gaussian does not harm the transition learning empirically. Recall that the synthesized transition is trained on an MDP whose reward function is the critic of the truncated WGAN. To achieve this goal with proper stability, we employ PPO , which is an efficient approximation of TRPO . Note that although the PPO is originally designed for policy optimization, it can be adapted to transition learning with a fixed sampling policy and the PPO objective (Eq. 7 of) where r t (φ) = T φ (s t+1 |s t, a t) T φold (s t+1 |s t, a t), t: advantage func. derived from the pseudo reward f (s t, a t, s t+1). To enhance stability of the transition learning, in addition to PPO, we also optimize maximum likelihood, which can be regarded as a regularization. We empirically observe that jointly optimizing both maximum likelihood and the PPO objective attains better transition model for policy gradient. The overall loss of the transition learning becomes where L mle is the loss of MLE, which is policy-agnostic and can be estimated with all collected real transitions. For more implementation details, please see Appendix B.1. We consider a training procedure similar to SLBO , where they consider the fact that the value function is dependent on the varying transition model. As a , unlike most of the MBRL methods that have only one pair of model-policy update for each real environment sampling, SLBO proposes to take multiple update pairs for each real environment sampling. Our proposed model imitation (MI) method is summarized in Algorithm 1. In the experiment section, we would like to answer the following questions. Does the proposed model imitation outperforms the state-of-the-art in terms of sample complexity and average return? Does the proposed model imitation benefit from distribution matching and is superior to its model-free and model-based counterparts, TRPO and SLBO? To fairly compare algorithms and enhance reproducibility, we adopt open-sourced environments released along with a model-based benchmark paper , which is based on a physical simulation engine, MuJoCo . Specifically, we evaluate the proposed algorithm MI on four continuous control tasks including Hopper, HalfCheetah, Ant, and Reacher. For hyperparameters mentioned in Algorithm 1 and coefficients such as entropy regularization λ, please refer to Appendix B.2. Table 1: Proportion of bench-marked RL methods that are inferior to MI in terms of 5% t-test. x/y indicates that among y approaches, MI is significantly better than x approaches. The detailed performance can be found in Table 1 of. It should be noted that the reported in We compare to two model-free algorithms, TRPO and PPO , to assess the benefit of utilizing the proposed model imitation since our MI (Algorithm 1) uses TRPO for policy gradient to update the agent policy. We also compare MI to four model-based methods. 
SLBO gives theoretical guarantees of monotonic improvement for model-based deep RL and proposes to update a joint model-policy objective. PETS propose to employ uncertainty-aware dynamic models with sampling-based uncertainty to capture both aleatoric and epistemic uncertainty. METRPO shows that insufficient data may cause instability and propose to use an ensemble of models to regularize the learning process. STEVE dynamically interpolates among model rollouts of various horizon lengths and favors those whose estimates have lower error. Figure 2 shows the learning curves for all methods. In Hopper, HalfCheetah, and Ant, MI converges fairly fast and learns a policy significantly better than competitors'. In Ant, even though MI does not improve the performance too much from the initial one, the fact that it maintains the average return at around 1,000 indicates that MI can capture a better transition than other methods do with only 5,000 transition data. Even though we do not employ an ensemble of models, the curves show that our learning does not suffer from high variance. In fact, the performance shown in Figure 2 indicates that the variance of MI is lower than that of methods incorporating ensembles such as METRPO and PETS. The questions raised at the beginning of this section can now be answered. The learned model enables TRPO to explore the world without directly access real transitions and therefore TRPO equipped with MI needs much fewer interactions with the real world to learn a good policy. Even though MI is based on the training framework proposed in SLBO, the additional distribution matching component allows the synthesized model to generate similar rollouts to that of the real environments, which empirically gives superior performance because we rely on long rollouts to estimate policy gradient. To better understand the performance presented in Figure 2, we further compare MI with benchmarked RL algorithms recorded in including state-of-the-art MFRL methods such as TD3 and SAC . It should be noted that the reported of are the final performance after 200k time-steps but we only use up to 100k time-steps to train MI. Table 1 indicates that MI significantly outperforms most of the MBRL and MFRL methods with 50% fewer samples, which verifies that MI is more sample-efficient by incorporating distribution matching. We have pointed out that the state-of-the-art methods concentrate on learning synthesized models in a supervised fashion, which does not guarantee that the policy is able to reproduce a similar trajectory in the learned model and therefore the model may not be accurate enough to estimate long rollouts. We have proposed to incorporate WGAN to achieve occupancy measure matching between the real transition and the synthesized model and theoretically shown that matching indicates the closeness in cumulative rewards between the synthesized model and the real environment. To enable stable training across WGANs, we have suggested using a truncated version of WGAN to prevent training from getting stuck at local optimums. The empirical property of WGAN application such as imitation learning indicates its potential to learn the transition with fewer samples than supervised learning. We have confirmed it experimentally by further showing that MI converges much faster and obtains better policy than state-of-the-art model-based and model-free algorithms. A.1 PROOF FOR WGAN Proposition 1 (Consistency for WGAN). 
Let α(s), π(a|s), T (s |s, a) be initial state distribution, policy and synthesized transition. Let T be the true transition, p(s, a, s) = ρ α,π T (s, a)T (s |s, a) be the discounted distribution of the triple (s, a, s) under the true transition. If the WGAN is trained to its optimal point, we have Proof. Because the loss function of WGAN is the 1-Wasserstein distance, we know p(s, a, s) = p (s, a, s) at its optimal points. Plug in to the Bellman flow constraint Eq., where the first inequality holds because r(s, a)/L r is 1-Lipschitz and the last equality follows from Kantorovich-Rubinstein duality. Since W 1 distance is symmetric, the same holds if we interchange T and T, so we arrive at |R(π, T) − R(π, T)| ≤ L r /(1 − γ). To enhance state exploration, we sample real transitions according to policy β ∼ N (µ θ (s), σ), where µ(s) is the mean of our Gaussian parameterized policy π θ and σ is a fixed standard deviation. In addition, since model the transition as a Gaussian distribution, we found that matching ρ α,π θ T with ρ α,β T is empirically more stable and more sample-efficient than matching ρ α,β T with ρ α,β T. For policy update, it is shown that using the mean µ φ of the Gaussian-parameterized transition can accelerate policy optimization and better balance exploration and exploitation. In order to enforce the Lipschitz constraint to the WGAN critic f, we employ gradient penalty Table 2: List of hyper-parameters adopted in our experiments.
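A sketch of such a gradient penalty for the (s, a, s') critic is given below; since the real and synthesized triples share (s, a), only the next state is interpolated here, which is a simplifying assumption rather than the exact published recipe:

import torch

# Evaluate the critic on random interpolations between real and synthesized next states and
# penalize the deviation of the gradient norm from 1 to keep f approximately 1-Lipschitz.
def gradient_penalty(critic, s, a, s_next_real, s_next_fake, coef=10.0):
    eps = torch.rand(s_next_real.size(0), 1, device=s_next_real.device)
    s_next_mix = (eps * s_next_real + (1 - eps) * s_next_fake).requires_grad_(True)
    scores = critic(s, a, s_next_mix)
    grads, = torch.autograd.grad(scores.sum(), s_next_mix, create_graph=True)
    return coef * ((grads.norm(2, dim=-1) - 1) ** 2).mean()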
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
S1lJv0VYDr
Our method incorporates WGAN to achieve occupancy measure matching for transition learning.
Batch Normalization (BN) and its variants have seen widespread adoption in the deep learning community because they improve the training of deep neural networks. Discussions of why this normalization works so well remain unsettled. We make explicit the relationship between ordinary least squares and partial derivatives computed when back-propagating through BN. We recast the back-propagation of BN as a least squares fit, which zero-centers and decorrelates partial derivatives from normalized activations. This view, which we term {\em gradient-least-squares}, is an extensible and arithmetically accurate description of BN. To further explore this perspective, we motivate, interpret, and evaluate two adjustments to BN. Training deep neural networks has become central to many machine learning tasks in computer vision, speech, and many other application areas. BID10 showed empirically that Batch Normalization (BN) enables deep networks to attain faster convergence and lower loss. Reasons for the effectiveness of BN remain an open question BID12. Existing work towards explaining this have focused on covariate shift; described how BN makes the loss function smoother. This work examines the details of the back-propagation of BN, and recasts it as a least squares fit. This gradient regression zero-centers and decorrelates partial derivatives from the normalized activations; it passes on a scaled residual during back-propagation. Our view provides novel insight into the effectiveness of BN and several existing alternative normalization approaches in the literature. Foremost, we draw an unexpected connection between least squares and the gradient computation of BN. This motivates a novel view that complements earlier investigations into why BN is so effective. Our view is consistent with recent empirical surprises regarding ordering of layers within ResNet residual maps BID5 and within shake-shake regularization branches BID6. Finally, to demonstrate the extensibility of our view, we motivate and evaluate two variants of BN from the perspective of gradient-least-squares. In the first variant, a least squares explanation motivates the serial chaining of BN and Layer Normalization (LN) BID0. In the second variant, regularization of the least-squares leads to a version of BN that performs better on batch size two. In both variants, we provide empirical support on CIFAR-10.In summary, our work presents a view, which we term gradient-least-squares, through which the back-propagation of BN and related work in a neural network can be recast as least squares regression. This regression decomposes gradients into an explained portion and a residual portion; BN back-propagation will be shown to remove the explained portion. Hopefully, gradient-least-squares will be broadly useful in the future design and understanding of neural network components. Figure 1 reviews normalization with batch statistics, and illustrates our main theorem. Gradient-least-squares relates quantities shown in hexagons Figure 1: The left figure reviews, for a single channel at a particular layer within a single batch, notable quantities computed during the forward pass and during back-propagation of BN. Let DISPLAYFORM0 Let L be a function dependent on the normalized activations z i defined for each j by z j = (x j − µ) σ This, along with partial derivatives, are shown in the left figure. Our work establishes a novel identity on the quantities shown in hexagons. 
The right figure illustrates our main in a scatter plot, in which each pair z i, ∂L ∂z i is shown as a data point in the regression. Consider any particular channel within which {x i} are activations to be normalized in BN moment calculations. BID10 defined BN as DISPLAYFORM0 where σ, µ are batch moments, but b and c are learned per-channel parameters persistent across batches. In BN, the batch dimension and spatial dimensions are marginalized out in the computation of batch moments. For clarity, we consider a simplified version of BN. We ignore the variables b and c in equation 1 responsible for a downstream channel-wise affine transformation. Ignoring b and c is done without loss of generality, since the main observation in this work will focus on the Gaussian normalization and remains agnostic to downstream computations. We also ignore a numerical stability hyperparameter.We examine back-propagation of partial derivatives through this normalization, where µ and σ are viewed as functions of x. Notably, µ and σ are functions of each x i, and thus the division by σ is not affine. We write the normalized output as DISPLAYFORM1 We review ordinary least squares of a single variable with intercept BID2.Let g j = α + βz j + j where α and β are parameters, z and g are observations. z j and g j are entries in z and g respectively. j are i.i.d. Gaussian residuals. We wish to fit α and β α,β = arg min DISPLAYFORM2 The least-squares problem in equation 3 is satisfied byβ = Cov(z, g) DISPLAYFORM3 When z are normalized activations and g are partial derivatives, then Ez = 0 and Var(z) = 1. In this special case, the solution simplifies intô DISPLAYFORM4 Theorem 1 (Normalization gradients are least-squares residuals). Let i ∈ {1 . . . N} be indices over some set of activations {x i}. Then the moment statistics are defined by µ = N i=1x i N and DISPLAYFORM5 Let L be a function dependent on the normalized activations z i defined for each j by z j = (x j − µ) σ. Then, the gradients of L satisfy, for all j ∈ {1, . . ., N}, the following: DISPLAYFORM6 where DISPLAYFORM7 Proof: Normalization gradients are least-squares residuals. The proof involves a derivation of partial derivatives by repeated applications of the chain rule and rules of total derivative. Because {z i} normalized over i has mean 0 and variance 1, the partial derivatives can be rearranged to satisfy the single variable ordinary least squares framework. Fix j. We expand ∂L ∂x j as a linear combination of DISPLAYFORM8 We state ∂z i ∂x j directly. Steps are in Appendix A under Lemma 1. DISPLAYFORM9 Through substitution of equations 10 into 9, we get DISPLAYFORM10 Noting that {z i} normalized over i has mean 0 and variance 1, we recoverβ andα, in the sense of equations 4 and 5, from equation 13. DISPLAYFORM11 Finally, we rearrange equations 15 and 14 into 13 to conclude, as desired, DISPLAYFORM12 During back-propagation of a single batch, the normalization function takes in partial derivatives ∂L ∂z (·), and removes that which can be explained by least squares of ∂L ∂z (·) against z (·). As illustrated in Figure 1, during back-propagation, the residual then divides away σ to become ∂L ∂x (·), the gradient for the unnormalized activations. CALCULATIONS BN aims to control its output to have mean near 0 and variance near 1, normalized over the dataset; this is related to the original explanation termed internal covariate shift BID10. Most existing work that improve or re-purpose BN have focused on describing the distribution of activations. 
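Theorem 1 can be verified numerically with automatic differentiation. The sketch below (a sanity check, not the authors' code) normalizes one channel's activations with its own batch moments, back-propagates an arbitrary loss, and confirms that the gradient with respect to the unnormalized activations equals the rescaled least-squares residual:

import torch

torch.manual_seed(0)
x = torch.randn(64, dtype=torch.double, requires_grad=True)  # one channel's batch activations
mu = x.mean()
sigma = ((x - mu) ** 2).mean().sqrt()         # biased batch std, as in the simplified BN above
z = (x - mu) / sigma

loss = (torch.sin(3 * z) + z ** 3).sum()      # any differentiable function of the normalized z
dz, = torch.autograd.grad(loss, z, retain_graph=True)
dx, = torch.autograd.grad(loss, x)

beta_hat = (dz * z).mean()                    # OLS slope: Cov(z, dL/dz), since Var(z) = 1
alpha_hat = dz.mean()                         # OLS intercept: E[dL/dz], since E[z] = 0
print(torch.allclose(dx, (dz - alpha_hat - beta_hat * z) / sigma))   # True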
Definition 1. In the context of normalization layers inside a neural network, activations are split into partitions, within which means and variances are computed. We refer to these partitions as normalization partitions. Definition 2. Within the context of a normalization partition, we refer to the moments calculated on the partitions as partition statistics. Theorem 1 shows that BN has least squares fitting built into the gradient computation. Gradients of the activations being normalized in each batch moment calculation are fit with a single-variable with-intercept least squares model, and only a rescaled residual is kept during back-propagation. We emphasize that the data on which the regression is trained and applied is a subset of empirical activations within a batch, corresponding to the normalization partitions of BN.To show extensibility, we recast several popular normalization techniques into the gradient-leastsquares view. We refer to activations arising from a single member of a particular batch as an item. BHW C refers to dimensions corresponding to items, height, width, and channels respectively. In non-image applications or fully connected layers, H and W are 1. BN marginalizes out the items and spatial dimensions, but statistics for each channel are kept separate. In the subsequent sections, we revisit several normalization methods from the perspective of the gradient. FIG1 reviews the normalization partitions of these methods, and places our main theorem about gradient-least-squares into context. BID0 introduced Layer Normalization (LN) in the context of large LSTM models and recurrent networks. Only the (H, W, C) dimensions are marginalized in LN, whereas BN marginalizes out the (B, H, W) dimensions. In our regression framework, the distinction can be understood as changing the data point partitions in which least squares are fit during back-propagation. LN marginalizes out the channels, but computes separate statistics for each batch item. To summarize, the regression setup in the back-propagation of LN is performed against other channels, rather than against other batch items. introduced Instance Normalization (IN) in the context of transferring styles across images. IN is is closely related to contrast normalization, an older technique used in image processing. IN emphasizes end-to-end training with derivatives passing through the moments. Only the (H, W) dimensions are marginalized in IN, whereas BN marginalizes (B, H, W) dimensions. In our framework, this can be understood as using fewer data points and a finer binning to fit the least squares during back-propagation, as each batch item now falls into its own normalization partition. introduced Group Normalization (GN) to improve performance on image-related tasks when memory constrains the batch size. Similar to LN, GN also marginalizes out the (H, W, C) dimensions in the moment computations. The partitions of GN are finer: the channels are grouped into disjoint sub-partitions, and the moments are computed for each sub-partition. 
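To make these partitions concrete, the sketch below computes partition means for BN, LN, IN, and GN on a hypothetical BHWC activation tensor (variances follow the same axis pattern). The axes marginalized in the forward pass are also the axes over which the (z, ∂L/∂z) pairs are pooled in each gradient regression.

```python
import numpy as np

B, H, W, C = 4, 8, 8, 16
x = np.random.default_rng(1).normal(size=(B, H, W, C))

# BN: marginalize (B, H, W); one partition per channel.
bn_mu = x.mean(axis=(0, 1, 2), keepdims=True)        # shape (1, 1, 1, C)

# LN: marginalize (H, W, C); one partition per batch item.
ln_mu = x.mean(axis=(1, 2, 3), keepdims=True)        # shape (B, 1, 1, 1)

# IN: marginalize (H, W); one partition per (item, channel) pair.
in_mu = x.mean(axis=(1, 2), keepdims=True)           # shape (B, 1, 1, C)

# GN with G groups: marginalize (H, W) and the channels within each group.
G = 4
xg = x.reshape(B, H, W, G, C // G)
gn_mu = xg.mean(axis=(1, 2, 4), keepdims=True)       # shape (B, 1, 1, G, 1)
```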
When the number of groups is one, GN reduces to LN.In future normalization methods that involve normalizing with respect to different normalization partitions; such methods can pattern match with BN, LN, IN, or GN; the back-propagation can be formulated as a least-squares fit, in which the partial derivatives at normalized activations ∂L ∂z (·) are fitted against the normalized z (·), and then the residual of the fit is rescaled to become ∂L ∂x (·).Figure 2 summarize the normalization partitions for BN, LN, IN, and GN; the figure visualizes, as an example, a one-to-one correspondence between an activation in BN, and a data point in the gradient regression. Theorem 1 is agnostic to the precise nature of how activations are partitioned before being normalized; thus, equation 9 applies directly to any method that partitions activations and performs Gaussian normalization on each partition. DISPLAYFORM0 DISPLAYFORM1 The L2 normalization of weights in WN appears distinct from the Gaussian normalization of activations in BN; nevertheless, WN can also be recast as a least squares regression. BID4 and improved BID5 residual mappings in ResNets. Arrows point in the direction of the forward pass. Dotted lines indicate that gradients are zero-centered and decorrelated with respect to downstream activations in the residual mapping. The improved ordering has BN coming first, and thus constrains that gradients of the residual map must be decorrelated with respect to some normalized activations inside the residual mapping. An update to the popular ResNet architecture showed that the network's residual mappings can be dramatically improved with a new ordering BID5. The improvement moved BN operations into early positions and surprised the authors; we support the change from the perspective of gradient-least-squares. FIG2 reviews the precise ordering in the two versions. BID6 provides independent empirical support for the BN-early order, in shake-shake regularization BID3 architectures. We believe that the surprise arises from a perspective that views BN only as a way to control the distribution of activations; one would place BN after a sequence of convolution layers. In the gradient-least-squares perspective, the first layer of each residual mapping is also the final calculation for these gradients before they are added back into BID0 0.9102 0.3548 the main trunk. The improved residual branch constrains the gradients returning from the residual mappings to be zero-centered and decorrelated with respect to some activations inside the branch. We illustrate this idea in FIG2. Gradient-least-squares views back-propagation in deep neural networks as a solution to a regression problem. Thus, formulations and ideas from a regression perspective would motivate improvements and alternatives to BN. We pursue and evaluate two of these ideas. BN and LN are similar to each other, but they normalize over different partitioning of the activations; in back-propagation, the regressions occur respectively with respect to different partitions of the activations. Suppose that a BN and a LN layer are chained serially in either order. This in a two-step regression during back-propagation; in reversed order, the residual from the first regression is further explained by a second regression on a different partitioning. In principle, whether this helps would depend on the empirical characteristics of the gradients encountered during training. The second regression could further decorrelate partial gradients from activations. 
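A minimal sketch of the serial chaining evaluated in the next section, assuming plain Gaussian normalization with no epsilon and with the single downstream per-channel affine omitted: the forward pass normalizes over the BN partition and then over the LN partition, so back-propagation first regresses incoming gradients against the LN-normalized activations within each item, and the rescaled residual is then regressed against the BN-normalized activations within each channel.

```python
import numpy as np

def normalize(x, axes):
    """Gaussian-normalize x over the given axes (one partition per remaining index)."""
    mu = x.mean(axis=axes, keepdims=True)
    sigma = np.sqrt(((x - mu) ** 2).mean(axis=axes, keepdims=True))
    return (x - mu) / sigma

def bn_then_ln(x):
    x = normalize(x, axes=(0, 1, 2))      # BN partition: per channel
    return normalize(x, axes=(1, 2, 3))   # LN partition: per batch item

x = np.random.default_rng(2).normal(size=(4, 8, 8, 16))
y = bn_then_ln(x)                         # the reversed order swaps the two regressions
```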
Empirically, we show improvement in a reference ResNet-34-v2 implementation on CIFAR-10 relative to BN with batch size 128. In all cases, only a single per-channel downstream affine transformation is applied, after both normalization layers, for consistency in the number of parameters. See table 1 for CIFAR-10 validation performances. We kept all default hyperparameters from the reference implementation: learning schedules, batch sizes, and optimizer settings. BN performs less well on small batches BID9. Gradient-least-squares interprets this as gradient regressions failing on correlated data, an issue typically addressed by regularization. We pursue this idea to recover some performance on small batches by use of regularization. Our regularization uses streaming estimates of past gradients to create virtual data in the regression. This performed better than standard BN on the same batch size, but we did not recover the performance of large batches; this is consistent with the idea that regularization could not in general compensate for having much less data. See Appendix C for CIFAR-10 validation performances. DISPLAYFORM0 to rescale the contributions to the batch mean for each normalization scheme. It uses an analogous set of parameters λ k and activations w k for variances. We sketch the back-propagation of a simplified version of SN in the perspective of gradient-least-squares. We ignore both the division and downstream affine z → c · z + b. The normalization calculation inside SwN can be written as: DISPLAYFORM1 where Ω = {BN, LN, IN}. There is potentially a unique mean and variance used for each activation. Equation 19 bears similarities to the setup in Theorem 1, but we leave unresolved whether there is a gradient-least-squares regression interpretation for SN. Decorrelated Batch Normalization (DBN) ) is a generalization of BN that performs Mahalanobis ZCA whitening to decorrelate the channels, using differentiable operations. On some level, the matrix gradient equation resemble the least squares formulation in Theorem 1.Spectral Normalization (SpN) BID15 is an approximate spectral generalization of WN. For DBN and SpN, the regression interpretations remain unresolved. BN has been instrumental in the training of deeper networks BID10. Subsequent work ed in Batch Renormalization BID9, and further emphasized the importance of passing gradients through the minibatch moments, instead of a gradient-free exponential running average. In gradient-least-squares, use of running accumulators in the training forward pass would stop the gradients from flowing through them during training, and there would be no least-squares. BID5 demonstrate empirically the unexpected advantages of placing BN early in residual mappings of ResNet. Santurkar et al. FORMULA1 showed that BN makes the loss landscape smoother, and gradients more predictable across stochastic gradient descent steps. BID1 found evidence that spatial correlation of gradients explains why ResNet outperforms earlier designs of deep neural networks. BID11 proved that BN accelerates convergence on least squares loss, but did not consider back-propagation of BN as a least squares residual. BID14 has recast BN as a stochastic process, ing in a novel treatment of regularization. This work makes explicit how BN back-propagation regresses partial derivatives against the normalized activations and keeps the residual. This view, in conjunction with the empirical success of BN, suggests an interpretation of BN as a gradient regression calculation. 
BN and its variants decorrelate and zero-center the gradients with respect to the normalized activations. Subjectively, this can be viewed as removing systematic errors from the gradients. Our view also support empirical in literature preferring early BN placement within neural network branches. Leveraging gradient-least-squares considerations, we ran two sets of normalization experiments, applicable to large batch and small batch settings. Placing a LN layer either before or after BN can be viewed as two-step regression that better explains the residual. We show empirically on CIFAR-10 that BN and LN together are better than either individually. In a second set of experiments, we address BN's performance degradation with small batch size. We regularize the gradient regression with streaming gradient statistics, which empirically recovers some performance on CIFAR-10 relative to basic BN, on batch size two. Why do empirical improvements in neural networks with BN keep the gradient-least-squares residuals and drop the explained portion? We propose two open approaches for investigating this in future work. A first approach focuses on how changes to the gradient regression in different formulations; the two empirical experiments in our work contribute to this. A second approach examines the empirical relationships between gradients of activations evaluated on the same parameter values; we can search for a shared noisy component arising from gradients in the same normalization partition. Suppose that the gradient noise correlates with the activations -this is plausible because the population of internal activations arise from using shared weights -then normalizations could be viewed as a layer that removes systematic noise during back-propagation. In DISPLAYFORM0 Then, the partial derivatives satisfy DISPLAYFORM1 Proof. In deriving ∂z j ∂x i, we will treat the cases of when j = i and when j = i separately. We start by examining intermediate quantities of interest as a matter of convenience for later use. We define helper quantities u i = x i − µ. Note that each u j depends on all of x i via µ. Next, we write out useful identities DISPLAYFORM2 We prepare to differentiate with rule of total derivative: DISPLAYFORM3 Making use of equations 21, 22, 23 and 25, We simplify ∂σ ∂x i for any i as follows. DISPLAYFORM4 We apply the quotient rule on ∂z j ∂x i when j = i, then substitute equation 33 DISPLAYFORM5 Similarly, when i = j, inputs in batch b. In our work, we keep track of am exponential running estimates across batches, DISPLAYFORM6 DISPLAYFORM7 DISPLAYFORM8 that marginalize the (B, H, W) dimensions into accumulators of shape C. The b subscript of the outer expectation is slightly abusive notation indicating thatα * andβ * are running averages across recent batches with momentum as a hyperparameter that determines the weighting. We regularize the gradient regression with virtual activations and virtual gradients, defined as follows. We append two virtual batch items, broadcast to an appropriate shape, x + = µ b + σ b and x − = µ b − σ b. Here, µ b and σ b are batch statistics of the real activations. The concatenated tensor undergoes standard BN, which outputs the usual {z i} for the real activations, but z + = 1 and z − = −1 for the virtual items. The z + and z − do not affect the feed forward calculations, but they receive virtual gradients during back-propagation: DISPLAYFORM9 Virtual data z +, ∂L ∂z + and z −, ∂L ∂z − regularizes the gradient-least-squares regression. 
∂L ∂z + and ∂L ∂z − eventually modify the gradients received by the real x i activations. The virtual data can be weighted with hyperparameters. In our experiments, we see improvements, robust to a hyperparameter cross-product search over the weightings and the momentum forα * andβ *. The momentum forα * andβ * were in {.997, .5} and the virtual item weights were in {2 i−1} i∈{0,1,2,3}. The performance of larger batches are not recovered; regularized regression could not be reasonably expected to recover the performance of regressing with more data. See table 2 for final validation performances with a reference Tensorflow ResNet-34-v2 implementation on batch size of two. The baseline evaluation with identity (no normalization) experienced noticeable overfitting in terms of cross entropy but not accuracy. The base learning rate was multiplied by 1 64 relative to the baseline rate used in runs with batch size 128.
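The virtual-item construction can be checked in a few lines: because the two appended items are symmetric about the batch mean, they leave the batch moments (and therefore the real z_i) unchanged while contributing regression points at z = +1 and z = −1. The sketch below covers a single channel only; the virtual gradients attached to those points, built from the streaming estimates of α* and β* with tunable weights, are not reproduced here.

```python
import numpy as np

x = np.random.default_rng(3).normal(size=8)     # one channel, batch size 8
mu, sigma = x.mean(), x.std()                   # biased batch moments

x_aug = np.concatenate([x, [mu + sigma, mu - sigma]])   # two virtual items
mu_aug, sigma_aug = x_aug.mean(), x_aug.std()

# The symmetric virtual items leave the batch moments unchanged ...
assert np.allclose([mu, sigma], [mu_aug, sigma_aug])

# ... so the real activations normalize exactly as before, while the virtual
# items land at z = +1 and z = -1, where virtual gradients can be attached
# as extra data points that regularize the gradient regression.
z_aug = (x_aug - mu_aug) / sigma_aug
print(z_aug[-2:])                               # [ 1. -1.]
```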
BkMq0oRqFQ
Gaussian normalization performs a least-squares fit during back-propagation, which zero-centers and decorrelates partial derivatives from normalized activations.
Batch Normalization (BN) has become a cornerstone of deep learning across diverse architectures, appearing to help optimization as well as generalization. While the idea makes intuitive sense, theoretical analysis of its effectiveness has been lacking. Here theoretical support is provided for one of its conjectured properties, namely, the ability to allow gradient descent to succeed with less tuning of learning rates. It is shown that even if we fix the learning rate of scale-invariant parameters (e.g., weights of each layer with BN) to a constant (say, 0.3), gradient descent still approaches a stationary point (i.e., a solution where gradient is zero) in the rate of T^{−1/2} in T iterations, asymptotically matching the best bound for gradient descent with well-tuned learning rates. A similar with convergence rate T^{−1/4} is also shown for stochastic gradient descent. Batch Normalization (abbreviated as BatchNorm or BN) is one of the most important innovation in deep learning, widely used in modern neural network architectures such as ResNet BID8, Inception , and DenseNet . It also inspired a series of other normalization methods (; BID0 ;).BatchNorm consists of standardizing the output of each layer to have zero mean and unit variance. For a single neuron, if x 1,..., x B is the original outputs in a mini-batch, then it adds a BatchNorm layer which modifies the outputs to DISPLAYFORM0 where µ = i=1 (x i − µ) 2 are the mean and variance within the minibatch, and γ, β are two learnable parameters. BN appears to stabilize and speed up training, and improve generalization. The inventors suggested that these benefits derive from the following:1. By stabilizing layer outputs it reduces a phenomenon called Internal Covariate Shift, whereby the training of a higher layer is continuously undermined or undone by changes in the distribution of its inputs due to parameter changes in previous layers., 2. Making the weights invariant to scaling, appears to reduce the dependence of training on the scale of parameters and enables us to use a higher learning rate;3. By implictly regularizing the model it improves generalization. But these three benefits are not fully understood in theory. Understanding generalization for deep models remains an open problem (with or without BN). Furthermore, in demonstration that intuition can sometimes mislead, recent experimental suggest that BN does not reduce internal covariate shift either , and the authors of that study suggest that the true explanation for BN's effectiveness may lie in a smoothening effect (i.e., lowering of the Hessian norm) on the objective. Another recent paper tries to quantify the benefits of BN for simple machine learning problems such as regression but does not analyze deep models. Provable quantification of Effect 2 (learning rates). Our study consists of quantifying the effect of BN on learning rates. observed that without BatchNorm, a large learning rate leads to a rapid growth of the parameter scale. Introducing BatchNorm usually stabilizes the growth of weights and appears to implicitly tune the learning rate so that the effective learning rate adapts during the course of the algorithm. They explained this intuitively as follows. After BN the output of a neuron z = BN(w x) is unaffected when the weight w is scaled, i.e., for any scalar c > 0, BN(w x) = BN((cw) x).Taking derivatives one finds that the gradient at cw equals to the gradient at w multiplied by a factor 1/c. 
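The 1/c gradient scaling can be verified directly on a toy scale-invariant loss; the single batch-normalized neuron and the least-squares objective below are hypothetical and serve only to illustrate the claim.

```python
import numpy as np

rng = np.random.default_rng(4)
X, y = rng.normal(size=(32, 10)), rng.normal(size=32)
w = rng.normal(size=10)

def F(w):
    s = X @ w
    z = (s - s.mean()) / s.std()               # batch-normalized neuron output
    return np.sum((z - y) ** 2)                # scale-invariant in w

def grad(w, eps=1e-6):
    return np.array([(F(w + eps * e) - F(w - eps * e)) / (2 * eps)
                     for e in np.eye(len(w))])

c = 7.0
print(F(c * w) - F(w))                            # ~0: rescaling w leaves the loss unchanged
print(np.max(np.abs(grad(c * w) - grad(w) / c)))  # ~0: the gradient shrinks by 1/c
```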
Thus, even though the scale of weight parameters of a linear layer proceeding a BatchNorm no longer means anything to the function represented by the neural network, their growth has an effect of reducing the learning rate. Our paper considers the following question: Can we rigorously capture the above intuitive behavior? Theoretical analyses of speed of gradient descent algorithms in nonconvex settings study the number of iterations required for convergence to a stationary point (i.e., where gradient vanishes). But they need to assume that the learning rate has been set (magically) to a small enough number determined by the smoothness constant of the loss function -which in practice are of course unknown. With this tuned learning rate, the norm of the gradient reduces asymptotically as T −1/2 in T iterations. In case of stochastic gradient descent, the reduction is like T −1/4. Thus a potential way to quantify the rate-tuning behavior of BN would be to show that even when the learning rate is fixed to a suitable constant, say 0.1, from the start, after introducing BN the convergence to stationary point is asymptotically just as fast (essentially) as it would be with a hand-tuned learning rate required by earlier analyses. The current paper rigorously establishes such auto-tuning behavior of BN (See below for an important clarification about scale-invariance).We note that a recent paper introduced a new algorithm WNgrad that is motivated by BN and provably has the above auto-tuning behavior as well. That paper did not establish such behavior for BN itself, but it was a clear inspiration for our analysis of BN.Scale-invariant and scale-variant parameters. The intuition of applies for all scale-invariant parameters, but the actual algorithm also involves other parameters such as γ and β whose scale does matter. Our analysis partitions the parameters in the neural networks into two groups W (scale-invariant) and g (scale-variant). The first group, W = {w,..., w (m) }, consists of all the parameters whose scales does not affect the loss, i.e., scaling w (i) to cw (i) for any c > 0 does not change the loss (see Definition 2.1 for a formal definition); the second group, g, consists of all other parameters that are not scale-invariant. In a feedforward neural network with BN added at each layer, the layer weights are all scale-invariant. This is also true for BN with p normalization strategies and other normalization layers, such as Weight Normalization , Layer Normalization BID0 ), Group Normalization (see Table 1 in BID0 for a summary). In this paper, we show that the scale-invariant parameters do not require rate tuning for lowering the training loss. To illustrate this, we consider the case in which we set learning rates separately for scale-invariant parameters W and scale-variant parameters g. Under some assumptions on the smoothness of the loss and the boundedness of the noise, we show that 1. In full-batch gradient descent, if the learning rate for g is set optimally, then no matter how the learning rates for W is set, (W ; g) converges to a first-order stationary point in the rate DISPLAYFORM0, which asymptotically matches with the convergence rate of gradient descent with optimal choice of learning rates for all parameters (Theorem 3.1); 2. 
In stochastic gradient descent, if the learning rate for g is set optimally, then no matter how the learning rate for W is set, (W ; g) converges to a first-order stationary point in the rate O(T −1/4 polylog(T)), which asymptotically matches with the convergence rate of gradient descent with optimal choice of learning rates for all parameters (up to a polylog(T) factor) (Theorem 4.2).In the usual case where we set a unified learning rate for all parameters, our imply that we only need to set a learning rate that is suitable for g. This means introducing scale-invariance into neural networks potentially reduces the efforts to tune learning rates, since there are less number of parameters we need to concern in order to guarantee an asymptotically fastest convergence. In our study, the loss function is assumed to be smooth. However, BN introduces non-smoothness in extreme cases due to division by zero when the input variance is zero (see equation 1). Note that the suggested implementation of BN by uses a smoothening constant in the whitening step, but it does not preserve scale-invariance. In order to avoid this issue, we describe a simple modification of the smoothening that maintains scale-invariance. Also, our cannot be applied to neural networks with ReLU, but it is applicable for its smooth approximation softplus BID5 ).We include some experiments in Appendix D, showing that it is indeed the auto-tuning behavior we analysed in this paper empowers BN to have such convergence with arbitrary learning rate for scale-invariant parameters. In the generalization aspect, a tuned learning rate is still needed for the best test accuracy, and we showed in the experiments that the auto-tuning behavior of BN also leads to a wider range of suitable learning rate for good generalization. Previous work for understanding Batch Normalization. Only a few recent works tried to theoretically understand BatchNorm. was described earlier. aims to find theoretical setting such that training neural networks with BatchNorm is faster than without BatchNorm. In particular, the authors analyzed three types of shallow neural networks, but rather than consider gradient descent, the authors designed task-specific training methods when discussing neural networks with BatchNorm. BID1 observes that the higher learning rates enabled by BatchNorm improves generalization. Convergence of adaptive algorithms. Our analysis is inspired by the proof for WNGrad , where the author analyzed an adaptive algorithm, WNGrad, motivated by Weight Normalization . Other works analyzing the convergence of adaptive methods are (; ; ;).Invariance by Batch Normalization. BID3 proposed to run riemmanian gradient descent on Grassmann manifold G(1, n) since the weight matrix is scaling invariant to the loss function. observed that the effective stepsize is proportional to ηw wt 2. In this section, we introduce our general framework in order to study the benefits of scale-invariance. Scale-invariance is common in neural networks with BatchNorm. We formally state the definition of scale-invariance below:Definition 2.1. (Scale-invariance) Let F(w, θ) be a loss function. We say that w is a scale-invariant parameter of F if for all c > 0, F(w, θ) = F(cw, θ); if w is not scale-invariant, then we say w is a scale-variant parameter of F.We consider the following L-layer "fully-batch-normalized" feedforward network Φ for illustration: DISPLAYFORM0 } is a mini-batch of B pairs of input data and ground-truth label from a data set D. 
f y is an objective function depending on the label, e.g., f y could be a cross-entropy loss in classification tasks. W,..., W (L) are weight matrices of each layer. σ: R → R is a nonlinear activation function which processes its input elementwise (such as ReLU, sigmoid). Given a batch of inputs z 1,..., z B ∈ R m, BN(z b) outputs a vectorz b defined as DISPLAYFORM1 where DISPLAYFORM2 are the mean and variance of z b, γ k and β k are two learnable parameters which rescale and offset the normalized outputs to retain the representation power. The neural network Φ is thus parameterized by weight matrices W (i) in each layer and learnable parameters γ k, β k in each BN.BN has the property that the output is unchanged when the batch inputs z 1,k,..., z B,k are scaled or shifted simultaneously. For z b,k = w kx b being the output of a linear layer, it is easy to see that w k is scale-invariant, and thus each row vector of weight matrices DISPLAYFORM3 In convolutional neural networks with BatchNorm, a similar argument can be done. In particular, each filter of convolutional layer normalized by BN is scale-invariant. With a general nonlinear activation, other parameters in Φ, the scale and shift parameters γ k and β k in each BN, are scale-variant. When ReLU or Leaky ReLU are used as the activation σ, the vector (γ 1, . . ., γ m, β 1, . . ., β m) of each BN at layer 1 ≤ i < L (except the last one) is indeed scale-invariant. This can be deduced by using the the (positive) homogeneity of these two types of activations and noticing that the output of internal activations is processed by a BN in the next layer. Nevertheless, we are not able to analyse either ReLU or Leaky ReLU activations because we need the loss to be smooth in our analysis. We can instead analyse smooth activations, such as sigmoid, tanh, softplus BID5, etc. Now we introduce our general framework. Let Φ be a neural network parameterized by θ. Let D be a dataset, where each data point z ∼ D is associated with a loss function F z (θ) (D can be the set of all possible mini-batches). We partition the parameters θ into (W ; g), where W = {w,..., w (m) } consisting of parameters that are scale-invariant to all F z, and g contains the remaining parameters. The goal of training the neural network is to minimize the expected loss over the dataset: DISPLAYFORM0 In order to illustrate the optimization benefits of scaleinvariance, we consider the process of training this neural network by stochastic gradient descent with separate learning rates for W and g: DISPLAYFORM1 Thanks to the scale-invariant properties, the scale of each weight w (i) does not affect loss values. However, the scale does affect the gradients. Let V = {v,..., v (m) } be the set of normalized weights, where DISPLAYFORM0 2. The following simple lemma can be easily shown: Lemma 2.2 (Implied by). For any W and g, DISPLAYFORM1 To make ∇ w (i) F z (W ; g) 2 to be small, one can just scale the weights by a large factor. Thus there are ways to reduce the norm of the gradient that do not reduce the loss. For this reason, we define the intrinsic optimization problem for training the neural network. Instead of optimizing W and g over all possible solutions, we focus on parameters θ in which w DISPLAYFORM2 This does not change our objective, since the scale of W does not affect the loss. 2 = 1 for all i} be the intrinsic domain. 
The intrinsic optimization problem is defined as optimizing the original problem in U: DISPLAYFORM0 For {θ t} being a sequence of points for optimizing the original optimization problem, we can define {θ t}, whereθ t = (V t ; g t), as a sequence of points optimizing the intrinsic optimization problem. In this paper, we aim to show that training neural network for the original optimization problem by gradient descent can be seen as training by adaptive methods for the intrinsic optimization problem, and it converges to a first-order stationary point in the intrinsic optimization problem with no need for tuning learning rates for W. We assume F z (W ; g) is defined and twice continuously differentiable at any θ satisfying none of DISPLAYFORM0 2, we assume that the following bounds on the smoothness: DISPLAYFORM1 In addition, we assume that the noise on the gradient of g in SGD is upper bounded by G g: DISPLAYFORM2 Smoothed version of motivating neural networks. Note that the neural network Φ illustrated in Section 2.1 does not meet the conditions of the smooothness at all since the loss function could be non-smooth. We can make some mild modifications to the motivating example to smoothen it 1:. The activation could be non-smooth. A possible solution is to use smooth nonlinearities, e.g., sigmoid, tanh, softplus BID5, etc. Note that softplus can be seen as a smooth approximation of the most commonly used activation ReLU.. The formula of BN shown in equation 3 may suffer from the problem of division by zero. To avoid this, the inventors of , add a small smoothening parameter > 0 to the denominator, i.e.,z DISPLAYFORM3 However, when z b,k = w kx b, adding a constant directly breaks the scale-invariance of w k. We can preserve the scale-invariance by making the smoothening term propositional to w k 2, i.e., replacing with w k 2. By simple linear algebra and letting DISPLAYFORM4, this smoothed version of BN can also be written as DISPLAYFORM5 Since the variance of inputs is usually large in practice, for small, the effect of the smoothening term is negligible except in extreme cases. Using the above two modifications, the loss function is already smooth. However, the scale of scale-variant parameters may be unbounded during training, which could cause the smoothness unbounded. To avoid this issue, we can either project scale-variant parameters to a bounded set, or use weight decay for those parameters (see Appendix C for a proof for the latter solution). The following lemma is our key observation. It establishes a connection between the scale-invariant property and the growth of weight scale, which further implies an automatic decay of learning rates:Lemma 2.4. For any scale-invariant weight w (i) in the network Ψ, we have: DISPLAYFORM0. w DISPLAYFORM1 Proof. Let θ t be all the parameters in θ t other than w (i)t. Taking derivatives with respect to c for the both sides of F zt (w DISPLAYFORM2 t, so the first proposition follows by taking c = 1. Applying Pythagorean theorem and Lemma 2.2, the second proposition directly follows. Using Lemma 2.4, we can show that performing gradient descent for the original problem is equivalent to performing an adaptive gradient method for the intrinsic optimization problem: DISPLAYFORM3 where Π is a projection operator which maps any vector w to w/ w 2 .Remark 2.6. Wu et al. FORMULA0 noticed that Theorem 2.5 is true for Weight Normalization by direct calculation of gradients. Inspiring by this, they proposed a new adaptive method called WNGrad. 
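Lemma 2.4 and the mechanics behind Theorem 2.5 can be checked numerically on the same kind of toy scale-invariant loss (a hypothetical batch-normalized neuron, not a network from the paper): the gradient is orthogonal to w, so one gradient-descent step grows the squared norm by exactly the squared step length, which is what makes the effective step size on the unit sphere decay automatically.

```python
import numpy as np

rng = np.random.default_rng(5)
X, y = rng.normal(size=(32, 10)), rng.normal(size=32)

def F(w):
    s = X @ w
    z = (s - s.mean()) / s.std()
    return np.sum((z - y) ** 2)                 # scale-invariant toy loss

def grad(w, eps=1e-6):
    return np.array([(F(w + eps * e) - F(w - eps * e)) / (2 * eps)
                     for e in np.eye(len(w))])

w, eta = rng.normal(size=10), 0.3
g = grad(w)
print(w @ g)                                     # ~0: gradient orthogonal to w (Lemma 2.4)

w_next = w - eta * g                             # one gradient-descent step
print(w_next @ w_next - (w @ w + eta ** 2 * (g @ g)))   # ~0: Pythagorean norm growth

# Consequently the effective learning rate eta / ||w_t||^2 for the normalized
# direction shrinks as the squared norm accumulates squared gradient norms.
```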
Our theorem is more general since it holds for any normalization methods as long as it induces scale-invariant properties to the network. The adaptive update rule derived in our theorem can be seen as WNGrad with projection to unit sphere after each step. Proof for Theorem 2.5. Using Lemma 2.2, we have DISPLAYFORM4 which implies the first equation. The second equation is by Lemma 2.4.While popular adaptive gradient methods such as AdaGrad BID4, RMSprop (Tieleman Assumptions on learning rates. We consider the case that we use fixed learning rates for both W and g, i.e., η w,0 = · · · = η w,T −1 = η w and η g,0 = · · · = η g,T −1 = η g . We assume that η g is tuned carefully to η g = (1 − c g)/L gg for some constant c g ∈. For η w, we do not make any assumption, i.e., η w can be set to any positive value. Theorem 3.1. Consider the process of training Φ by gradient descent with η g = 2(1 − c g)/L gg and arbitrary η w > 0. Then Φ converges to a stationary point in the rate of DISPLAYFORM0 where DISPLAYFORM1 This matches the asymptotic convergence rate of GD by BID2. The high level idea is to use the decrement of loss function to upper bound the sum of the squared norm of the gradients. Note that ∇L(DISPLAYFORM0 Thus the core of the proof is to show that the monotone increasing w DISPLAYFORM1 T 2 has an upper bound for all T . It is shown that for every w (i), the whole training process can be divided into at most two phases. In the first phase, the effective learning rate η w / w DISPLAYFORM2 2 is larger than some threshold 1 Ci (defined in Lemma 3.2) and in the second phase it is smaller. 2 is large enough and that the process enters the second phase, then by Lemma 3.2 in each step the loss function L will decrease by DISPLAYFORM0 by Lemma 2.4). Since L is lower-bounded, we can conclude w DISPLAYFORM1 2 is also bounded. For the second part, we can also show that by Lemma 3.2 DISPLAYFORM2 Thus we can concludeÕ(DISPLAYFORM3) convergence rate of ∇L(θ t) 2 as follows. DISPLAYFORM4 The full proof is postponed to Appendix A. In this section, we analyze the effect related to the scale-invariant properties when training a neural network by stochastic gradient descent. We use the framework introduced in Section 2.2 and assumptions from Section 2.4. Assumptions on learning rates. As usual, we assume that the learning rate for g is chosen carefully and the learning rate for W is chosen rather arbitrarily. More specifically, we consider the case that the learning rates are chosen as DISPLAYFORM0 We assume that the initial learning rate η g of g is tuned carefully to η g = (1 − c g)/L gg for some constant c g ∈. Note that this learning rate schedule matches the best known convergence rate O(T −1/4) of SGD in the case of smooth non-convex loss functions BID6.For the learning rates of W, we only assume that 0 ≤ α ≤ 1/2, i.e., the learning rate decays equally as or slower than the optimal SGD learning rate schedule. η w can be set to any positive value. Note that this includes the case that we set a fixed learning rate η w,0 = · · · = η w,T −1 = η w for W by taking α = 0. Remark 4.1. Note that the auto-tuning behavior induced by scale-invariances always decreases the learning rates. Thus, if we set α > 1/2, there is no hope to adjust the learning rate to the optimal strategy Θ(t −1/2). Indeed, in this case, the learning rate 1/G t in the intrinsic optimization process decays exactly in the rate ofΘ(t −α), which is the best possible learning rate can be achieved without increasing the original learning rate. 
Theorem 4.2. Consider the process of training Φ by gradient descent with η w,t = η w · (t + 1) DISPLAYFORM1 gg and η w > 0 is arbitrary. Then Φ converges to a stationary point in the rate of DISPLAYFORM2 where DISPLAYFORM3 and we see L gg = Ω.Note that this matches the asymptotic convergence rate of SGD, within a polylog(T) factor. We delay the full proof into Appendix B and give a proof sketch in a simplified setting where there is no g and α ∈ [0, 1 2). We also assume there's only one w i, that is, m = 1 and omit the index i. By Taylor expansion, we have DISPLAYFORM0 We can lower bound the effective learning rate η w,T w T 2 and upper bound the second order term respectively in the following way:. For all 0 ≤ α < 1 2, the effective learning rate DISPLAYFORM1. DISPLAYFORM2 Taking expectation over equation 14 and summing it up, we have DISPLAYFORM3 Plug the above bounds into the above inequality, we complete the proof. DISPLAYFORM4 In this paper, we studied how scale-invariance in neural networks with BN helps optimization, and showed that (stochastic) gradient descent can achieve the asymptotic best convergence rate without tuning learning rates for scale-invariant parameters. Our analysis suggests that scale-invariance in nerual networks introduced by BN reduces the efforts for tuning learning rate to fit the training data. However, our analysis only applies to smooth loss functions. In modern neural networks, ReLU or Leaky ReLU are often used, which makes the loss non-smooth. It would have more implications by showing similar in non-smooth settings. Also, we only considered gradient descent in this paper. It can be shown that if we perform (stochastic) gradient descent with momentum, the norm of scale-invariant parameters will also be monotone increasing. It would be interesting to use it to show similar convergence for more gradient methods. By the scale-invariant property of w (i), we know that L(W ; g) = L(V ; g). Also, the following identities about derivatives can be easily obtained: DISPLAYFORM0 Thus, the assumptions on the smoothness imply DISPLAYFORM1 Proof for Lemma 3.2. Using Taylor expansion, we have ∃γ ∈, such that for w DISPLAYFORM2 Note that w DISPLAYFORM3 t, we have DISPLAYFORM4 Thus, DISPLAYFORM5 By the inequality of arithmetic and geometric means, we have DISPLAYFORM6 Taking ∆w DISPLAYFORM7 We can complete the proof by replacing DISPLAYFORM8 Using the assumption on the smoothness, we can show that the gradient with respect to w (i) is essentially bounded: Lemma A.1. For any W and g, we have DISPLAYFORM9 Proof. A.1 Fix all the parameters except w (i). Then L(W ; g) can be written as a function f (w DISPLAYFORM10 Since f is continuous and S is compact, there must exist v DISPLAYFORM11 min is also a minimum in the entire domain and ∇f (w DISPLAYFORM12 For an arbitrary DISPLAYFORM13, and h goes along the geodesic from v (i) min to v (i) on the unit sphere S with constant speed. Let H(τ) = ∇f (h(τ)). By Taylor expansion, we have DISPLAYFORM14 where we use the fact that w DISPLAYFORM15 at the third line. We can bound S DISPLAYFORM16 Combining them together, we have DISPLAYFORM17 Taking sum over all i = 1,..., m and also subtracting G T on the both sides, we have DISPLAYFORM18 where DISPLAYFORM19 T + G T is used at the second line. Combining the lemmas above together, we can obtain our . Proof for Theorem 3.1. By Lemma A.2, we have DISPLAYFORM20 Thus min 0≤t<T ∇L(V t, g t) 2 converges in the rate of DISPLAYFORM21 Let F t = σ{z 0, . . 
., z t−1} be the filtration, where σ{·} denotes the sigma field. We use L t:= L zt (θ t), F t:= F zt (θ t) for simplicity. As usual, we define v DISPLAYFORM0 i. Let k be the maximum i such that t i exists. Let t k+1 = T + 1. Then we know that DISPLAYFORM1 Thus, DISPLAYFORM2 Proof. Conditioned on F t, by Taylor expansion, we have DISPLAYFORM3 where Q t is DISPLAYFORM4 By the inequality DISPLAYFORM5 Taking this into equation 21 and summing up for all t, we have DISPLAYFORM6 and the right hand side can be expressed as DISPLAYFORM7 Proof for Theorem 4.2. Combining Lemma B.2 and Lemma B.4, for 0 ≤ α < 1/2, we have DISPLAYFORM8 2 +Õ(log T).Thus, DISPLAYFORM9 Similarly, for α = 1/2, we have DISPLAYFORM10 2 +Õ(log T).Thus, DISPLAYFORM11 In this section we prove that the modified version of the motivating neural network does meet the assumptions in Section 2.4. More specifically, we assume:• We use the network structure Φ in Section 2.1 with the smoothed variant of BN as described in Section 2.4;• The objective f y (·) is twice continuously differentiable, lower bounded by f min and Lipschitz (|f y (ŷ)| ≤ α f );• The activation σ(·) is twice continuously differentiable and Lipschitz (|f y (ŷ)| ≤ α σ );• We add an extra weight decay (L2 regularization) term First, we show that g t (containing all scale and shift parameters in BN) is bounded during the training process. Then the smoothness follows compactness using Extreme Value Theorem. We use the following lemma to calculate back propagation: DISPLAYFORM0 Proof. Letx ∈ R B be the vector wherex b:= (w (x b −u))/ w S+ I. It is easy to see x 2 2 ≤ B. Then DISPLAYFORM1 For x b, we have DISPLAYFORM2 Thus, DISPLAYFORM3 Lemma C.2. If g 0 2 is bounded by a constant, there exists some constant K such that g t 2 ≤ K.Proof. Fix a time t in the training process. Consider the process of back propagation. Define DISPLAYFORM4 b,k is the output of the k-th neuron in the i-th layer in the b-th data sample in the batch. By the Lipschitzness of the objective, R L can be bounded by a constant. If R i can be bounded by a constant, then by the Lipschitzness of σ and Lemma C.1, the gradient of γ and β in layer i can also be bounded by a constant. Note that DISPLAYFORM5 Thus γ and β in layer i can be bounded by a constant since DISPLAYFORM6 Also Lemma C.1 and the Lipschitzness of σ imply that R i−1 can be bounded if R i and γ in the layer i can be bounded by a constant. Using a simple induction, we can prove the existence of K for bounding the norm of g t 2 for all time t. Theorem C.3. If g 0 2 is bounded by a constant, then Φ satisfies the assumptions in Section 2.4.Proof. Let C be the set of parameters θ satisfying g ≤ K and w (i) 2 = 1 for all 1 ≤ i ≤ m. By Lemma C.2, C contains the set ofθ associated with the points lying between each pair of θ t and θ t+1 (including the endpoints).It is easy to show that F z (θ) is twice continously differentiable. Since C is compact, by the Extreme Value Theorem, there must exist such constants L In this section, we provide experimental evidence showing that the auto rate-tuning behavior does empower BN in the optimization aspect. We trained a modified version of VGGNet on Tensorflow. This network has 2 × conv64, pooling, 3 × conv128, pooling, 3 × conv256, pooling, 3 × conv512, pooling, 3 × conv512, pooling, fc512, fc10 layers in order. Each convolutional layer has kernel size 3 × 3 and stride 1. ReLU is used as the activation function after each convolutional or fullyconnected layer. We add a BN layer right before each ReLU. 
We set = 0 in each BN, since we observed that the network works equally well for being 0 or an small number (such as 10 −3, the default value in Tensorflow). We initialize the parameters according to the default configuration in Tensorflow: all the weights are initialized by Glorot uniform initializer BID7; β and γ in BN are initialized by 0 and 1, respectively. In this network, every kernel is scale-invariant, and for every BN layer except the last one, the concatenation of all β and γ parameters in this BN is also scale-invariant. Only β and γ parameters in the last BN are scale-variant (See Section 2.1). We consider the training in following two settings:1. Train the network using the standard SGD (No momentum, learning rate decay, weight decay and dropout);2. Train the network using Projected SGD (PSGD): at each iteration, one first takes a step proportional to the negative of the gradient calculated in a random batch, and then projects each scale-invariant parameter to the sphere with radius equal to its 2-norm before this iteration, i.e., rescales each scale-invariant parameter so that each maintains its length during training. Note that the projection in Setting 2 removes the adaptivity of the learning rates in the corresponding intrinsic optimization problem, i.e., Gt in equation 9 remains constant during the training. Thus, by comparing Setting 1 and Setting 2, we can know whether or not the auto-tuning behavior of BN shown in theory is effective in practice. The relationship between the training loss and the learning rate. For learning rate larger than 10, the training loss of PSGD or SGD with BN removed is always either very large or NaN, and thus not invisible in the figure. Left: The average training loss of the last 5 epochs (averaged across 10 experiments). In rare cases, the training loss becomes NaN in the experiments for the green curve (SGD, BN removed) with learning rate larger than 10 −0.7. We removed such data when taking the average. Right: The average training loss of each epoch (each curve stands for a single experiment). The relationship between the test accuracy and the learning rate. Left: The average test accuracy of the last 5 epochs (averaged across 10 experiments). Right: The test accuracy after each epoch (each curve stands for a single experiment). Due to the implementation of Tensorflow, outputing NaN leads to a test accuracy of 10%. Note that the magenta dotted curve (PSGD, lr=100), red dashed curve (SGD, BN removed, lr=1) and cyan dashed curve (SGD, BN removed, lr=10) are covered by the magenta dashed curve (SGD, BN removed, lr=100). They all have 10% test accuracy. As in our theoretical analysis, we consider what will happen if we set two learning rates separately for scale-invariant and scale-variant parameters. We train the network in either setting with different learning rates ranging from 10 −2 to 10 2 for 100 epochs. First, we fix the learning rate for scale-variant ones to 0.1, and try different learning rates for scaleinvariant ones. As shown in FIG3, for small learning rates (such as 0.1), the training processes of networks in Setting 1 and 2 are very similar. But for larger learning rates, networks in Setting 1 can still converge to 0 for all the learning rates we tried, while networks in Setting 2 got stuck with relatively large training loss. This suggests that the auto-tuning behavior of BN does takes effect when the learning rate is large, and it matches with the claimed effect of BN in that BN enables us to use a higher learning rate. 
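A minimal sketch of the per-parameter update in the two settings, with the actual Tensorflow optimizer plumbing omitted: Setting 1 is plain SGD, while Setting 2 rescales each scale-invariant parameter back to its pre-step norm, which freezes the effective learning rate and removes the auto-tuning analyzed above.

```python
import numpy as np

def sgd_step(w, g, lr):
    # Setting 1: the norm of a scale-invariant parameter grows,
    # so its effective learning rate lr / ||w||^2 decays automatically.
    return w - lr * g

def psgd_step(w, g, lr):
    # Setting 2: take the same step, then project back to the sphere with the
    # previous radius, keeping the effective learning rate fixed.
    old_norm = np.linalg.norm(w)
    w_new = w - lr * g
    return w_new * (old_norm / np.linalg.norm(w_new))
```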
Though our theoretical analysis cannot be directly applied to the network we trained due to the non-smoothness of the loss function, the experiment match with what we expect in our analysis. Next, we consider the case in which we train the network with a unified learning rate for both scaleinvariant and scale-variant parameters. We also compare Setting 1 and 2 with the setting in which we train the network with all the BN layers removed using SGD (we call it Setting 3).As shown in FIG4, the training loss of networks in Setting 1 converges to 0. On the contrast, the training loss of networks in Setting 2 and 3 fails to converge to 0 when a large learning rate is used, and in some cases the loss diverges to infinity or NaN. This suggests that the auto-tuning behavior of BN has an effective role in the case that a unified learning rate is set for all parameters. For a fair comparison, we also trained neural networks in Setting 3 with initialization essentially equivalent to the ones with BN. This is done in the same way as (Krähenbühl et al., 2015;) and Section 3 of : we first randomly initialize the parameters, then feed the first batch into the network and adjust the scaling and bias of each neuron to make its outputs have zero mean and unit variance. In this way, the loss of the networks converges to 0 when the learning rate is smaller than 10 −2.0, but for a slightly larger learning rate such as 10 −1.8, the loss fails to converge to 0, and sometimes even diverges to infinity or NaN. Compared with experimental in Setting 1, this suggests that the robustness of training brought by BN is independent of the fact that BN changes the effective initialization of parameters. Despite in Setting 1 the convergence of training loss for different learning rates, the convergence points can be different, which lead to different performances on test data. In FIG5, we plot the test accuracy of networks trained in Setting 1 and 2 using different unified learning rates, or separate learning rates with the learning rate for scale-variant parameters fixed to 0.1. As shown in the FIG5, the test accuracy of networks in Setting 2 decreases as the learning rate increases over 0.1, while the test accuracy of networks in Setting 1 remains higher than 75%. The main reason that the network in Setting 2 doesn't perform well is underfitting, i.e. the network in Setting 2 fails to fit the training data well when learning rate is large. This suggests that the autotuning behavior of BN also benefits generalization since such behavior allows the algorithm to pick learning rates from a wider range while still converging to small test error.
rkxQ-nA9FX
We give a theoretical analysis of the ability of batch normalization to automatically tune learning rates, in the context of finding stationary points for a deep learning objective.
Generative models of natural images have progressed towards high fidelity samples by the strong leveraging of scale. We attempt to carry this success to the field of video modeling by showing that large Generative Adversarial Networks trained on the complex Kinetics-600 dataset are able to produce video samples of substantially higher complexity and fidelity than previous work. Our proposed model, Dual Video Discriminator GAN (DVD-GAN), scales to longer and higher resolution videos by leveraging a computationally efficient decomposition of its discriminator. We evaluate on the related tasks of video synthesis and video prediction, and achieve new state-of-the-art Fréchet Inception Distance for prediction for Kinetics-600, as well as state-of-the-art Inception Score for synthesis on the UCF-101 dataset, alongside establishing a strong baseline for synthesis on Kinetics-600. Figure 1: Selected frames from videos generated by a DVD-GAN trained on Kinetics-600 at 256×256, 128 × 128, and 64 × 64 resolutions (top to bottom). Modern deep generative models can produce realistic natural images when trained on high-resolution and diverse datasets (; ; ; ;). Generation of natural video is an obvious further challenge for generative modeling, but one that is plagued by increased data complexity and computational requirements. For this reason, much prior work on video generation has revolved around relatively simple datasets, or tasks where strong temporal conditioning information is available. We focus on the tasks of video synthesis and video prediction (defined in Section 2.1), and aim to extend the strong of generative image models to the video domain. Building upon the state-of-the-art BigGAN architecture , we introduce an efficient spatio-temporal decomposition of the discriminator which allows us to train on Kinetics-600 -a complex dataset of Figure 2: Generated video samples with interesting behavior. In raster-scan order: a) On-screen generated text with further lines appearing.. b) Zooming in on an object. c) Colored detail from a pen being left on paper. d) A generated camera change and return. natural videos an order of magnitude larger than other commonly used datasets. The ing model, Dual Video Discriminator GAN (DVD-GAN), is able to generate temporally coherent, high-resolution videos of relatively high fidelity (Figure 1). Our contributions are as follows: • We propose DVD-GAN -a scalable generative model of natural video which produces high-quality samples at resolutions up to 256 × 256 and lengths up to 48 frames. • We achieve state of the art for video synthesis on UCF-101 and prediction on Kinetics-600. • We establish class-conditional video synthesis on Kinetics-600 as a new benchmark for generative video modeling, and report DVD-GAN as a strong baseline. The exact formulation of the video generation task can differ in the type of conditioning signal provided. At one extreme lies unconditional video synthesis where the task is to generate any video following the training distribution. Another extreme is occupied by strongly-conditioned models, including generation conditioned on another video for content transfer , per-frame segmentation masks (a), or pose information (; b;). In the middle ground there are tasks which are more structured than unconditional generation, and yet are more challenging from a modeling perspective than strongly-conditional generation (which gets a lot of information about the generated video through its input). 
The objective of class-conditional video synthesis is to generate a video of a given category (e.g., "riding a bike") while future video prediction is concerned with generation of continuing video given initial frames. These problems differ in several aspects, but share a common requirement of needing to generate realistic temporal dynamics, and in this work we focus on these two problems. Generative Adversarial Networks (GANs) are a class of generative models defined by a minimax game between a Discriminator D and a Generator G. The original objective was proposed by , and many improvements have since been suggested, mostly targeting improved training stability (; ; ; ;). We use the hinge formulation of the objective which is optimized by gradient descent (ρ is the elementwise ReLU function): GANs have well-known limitations including a tendency towards limited diversity in generated samples (a phenomenon known as mode collapse) and the difficulty of quantitative evaluation due to the lack of an explicit likelihood measure over the data. Despite these downsides, GANs have produced some of the highest fidelity samples across many visual domains . Kinetics is a large dataset of 10-second high-resolution YouTube clips originally created for the task of human action recognition. We use the second iteration of the dataset, Kinetics-600 , which consists of 600 classes with at least 600 videos per class for a total of around 500,000 videos. 1 Kinetics videos are diverse and unconstrained, which allows us to train large models without being concerned with the overfitting that occurs on small datasets with fixed objects interacting in specified ways . Among prior work, the closest dataset (in terms of subject and complexity) which is consistently used is UCF-101 . We focus on Kinetics-600 because of its larger size (almost 50x more videos than UCF-101) and its increased diversity (600 instead of 101 classes -not to mention increased intra-class diversity). Nevertheless for comparison with prior art we train on UCF-101 and achieve a state-of-the-art Inception Score there. Kinetics contains many artifacts expected from YouTube, including cuts (as in Figure 2d), title screens and visual effects. Except when specifically described, we choose frames with stride 2 (meaning we skip every other frame). This allows us to generate videos with more complexity without incurring higher computational cost. To the best of our knowledge we are the first to consider generative modelling of the entirety of the Kinetics video dataset 2, although a small subset of Kinetics consisting of 4,000 selected and stabilized videos (via a SIFT + RANSAC procedure) has been used in at least two prior papers . Due to the heavy pre-processing and stabilization present, as well as the sizable reduction in dataset size (two orders of magnitude) we do not consider these datasets comparable to the full Kinetics-600 dataset. Designing metrics for measuring the quality of generative models (GANs in particular) is an active area of research . In this work we report the two most commonly used metrics, Inception Score (IS) and Fréchet Inception Distance (FID) . The standard instantiation of these metrics is intended for generative image models, and uses an Inception model for image classification or feature extraction. For videos, we use the publicly available Inflated 3D Convnet (I3D) network trained on Kinetics-600 . 
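The displayed hinge objective did not survive extraction; the sketch below writes out the standard hinge formulation popularized for GANs (and used by BigGAN), which we take to be the one intended: the discriminator minimizes E[ρ(1 − D(x))] + E[ρ(1 + D(G(z)))] while the generator minimizes −E[D(G(z))].

```python
import numpy as np

def rho(x):
    return np.maximum(x, 0.0)                   # elementwise ReLU

def d_hinge_loss(d_real, d_fake):
    # d_real, d_fake: discriminator scores on real and generated samples.
    return np.mean(rho(1.0 - d_real)) + np.mean(rho(1.0 + d_fake))

def g_hinge_loss(d_fake):
    return -np.mean(d_fake)
```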
Our Fréchet Inception Distance is therefore very similar to the Fréchet Video Distance (FVD) , although our implementation is different and more aligned with the original FID metric. More details are in Appendix A.4. Our primary contribution is Dual Video Discriminator GAN (DVD-GAN), a generative video model of complex human actions built upon the state-of-the-art BigGAN architecture while introducing scalable, video-specific generator and discriminator architectures. An overview of the DVD-GAN architecture is given in Figure 3 and a detailed description is in Appendix A.2. Unlike some of the prior work, our generator contains no explicit priors for foreground, background, or motion (optical flow); instead, we rely on a high-capacity neural network to learn this in a data-driven manner. While DVD-GAN contains sequential components (RNNs), it is not autoregressive in time or in space. In other words, the pixels of each frame do not directly depend on other pixels in the video, as would be the case for auto-regressive models or models generating one frame at a time. Generating long and high resolution videos is a heavy computational challenge: individual samples from Kinetics-600 (just 10 seconds long) contain upwards of 16 million pixels which need to be generated in a consistent fashion. This is a particular challenge to the discriminator. For example, a generated video might contain an object which leaves the field of view and incorrectly returns with a different color. Here, the ability to determine that this video is generated is only possible by comparing two different spatial locations across two (potentially distant) frames. Given a video with length T, height H, and width W, discriminators that process the entire video would have to process all H × W × T pixels - limiting the size of the model and the size of the videos being generated. DVD-GAN tackles this scale problem by using two discriminators: a Spatial Discriminator D S and a Temporal Discriminator D T. D S critiques single frame content and structure by randomly sampling k full-resolution frames and judging them individually. We use k = 8 and discuss this choice in Section 4.3. D S's final score is the sum of the per-frame scores. The temporal discriminator D T must provide G with the learning signal to generate movement (something not evaluated by D S). To make the model scalable, we apply a spatial downsampling function φ(·) to the whole video and feed its output to D T. We choose φ to be 2 × 2 average pooling, and discuss alternatives in Section 4.3. This results in an architecture where the discriminators do not process the entire video's worth of pixels, since D S processes only k × H × W pixels and D T processes only T × H/2 × W/2 pixels. For a 48 frame video at 128 × 128 resolution, this reduces the number of pixels to process per video from 786432 to 327680: a 58% reduction. Despite this decomposition, the discriminator objective is still able to penalize almost all inconsistencies which would be penalized by a discriminator judging the entire video. D T judges any temporal discrepancies across the entire length of the video, and D S can judge any high resolution details. The only detail the DVD-GAN discriminator objective is unable to reflect is the temporal evolution of pixels within a 2 × 2 window. We have however not noticed this affecting the generated samples in practice. DVD-GAN's D S is similar to the per-frame discriminator D I in MoCoGAN .
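The arithmetic behind the claimed 58% reduction can be checked directly. A minimal sketch, assuming D_T sees the video after 2 × 2 average pooling (halving H and W) and D_S sees k full-resolution frames:

```python
# Pixel counts for the dual-discriminator decomposition (values from the text).
T, H, W, k = 48, 128, 128, 8

full_video = T * H * W                # single discriminator over the whole video: 786432
spatial = k * H * W                   # D_S: k full-resolution frames -> 131072
temporal = T * (H // 2) * (W // 2)    # D_T: 2x2 average-pooled video  -> 196608

decomposed = spatial + temporal       # 327680
print(full_video, decomposed, 1 - decomposed / full_video)  # ~0.58, i.e. a 58% reduction
```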
However MoCoGAN's analog of D T looks at full resolution videos, whereas D S is the only source of learning signal for high-resolution details in DVD-GAN. For this reason, D S is essential when φ is not the identity, unlike in MoCoGAN where the additional per-frame discriminator is less crucial. Generative video modeling is a widely explored problem which includes work on VAEs (; ; ;) and recurrent models (b; ; c;), auto-regressive models (; ; ;), normalizing flows , and GANs (; ; ;). Much prior work considers decompositions which model the texture and spatial consistency of objects separately from their temporal dynamics. One approach is to split G into foreground and background models , while another considers explicit or implicit optical flow or motion in either G or D . Other methods decompose the generator (or encoder) to treat concepts like pose, content and motion separately from one another (; a). Similar to DVD-GAN, MoCoGAN discriminates individual frames in addition to a discriminator which operates on fixed-length K-frame slices of the whole video (where K < T). Though this potentially reduces the number of pixels to discriminate, MoCoGAN describes discriminating sliding windows, which increases the total number of pixels. Other models follow this approach by discriminating groups of frames (; ;). TGANv2 proposes "adaptive batch reduction" for efficient training, an operation which randomly samples subsets of videos within a batch and temporal subwindows within each video. This operation is applied throughout TGANv2's G, with heads projecting intermediate feature maps directly to pixel space before applying batch reduction, and corresponding discriminators evaluating these lower resolution intermediate outputs. An effect of this choice is that TGANv2 discriminators only evaluate full-length videos at very low resolution. We show in Figure 6 that a similar reduction in DVD-GAN's resolution when judging full videos leads to a loss in performance. We expect further reduction (towards the resolution at which TGANv2 evaluates the entire length of video) to lead to further degradation of DVD-GAN's quality. Furthermore, this method is not easily adapted towards models with large batch sizes divided across a number of accelerators, with only a small batch size per replica. A detailed description of our training setup is in Appendix A.3. Each DVD-GAN was trained on TPU pods using between 32 and 512 replicas with an Adam optimizer. Video Synthesis models are trained for around 300,000 learning steps, whilst Video Prediction models are trained for up to 1,000,000 steps. Most models took between 12 and 96 hours to train. Our primary results concern the problem of Video Synthesis. We provide our results for the UCF-101 and Kinetics-600 datasets. With Kinetics-600 emerging as a new benchmark for generative video modelling, our results establish a strong baseline for future work. In Table 1 we show the main results of this paper: benchmarks for Class-Conditional Video Synthesis on Kinetics-600. In this regime, we train a single DVD-GAN on all classes of Kinetics-600, supplying per-sample class information to both G and D. We consider a range of resolutions and video lengths, and measure Inception Score and Fréchet Inception Distance (FID) for each (as described in Section 2.4). We further measure each model along a truncation curve, which we carry out by calculating FID and IS statistics while varying the standard deviation of the latent vectors between 0 and 1.
There is no prior work with which to quantitatively compare these results (for comparative experiments see Section 4.1.2 and Section 4.2.1), but we believe these samples to show a level of fidelity not yet achieved in datasets as complex as Kinetics-600 (see samples from each row in Appendix D.1). Because all videos are resized for the I3D network (to 224 × 224), it is meaningful to compare metrics across equal length videos at different resolutions. Neither IS nor FID is comparable across videos of different lengths, and they should be treated as separate metrics. Generating longer and larger videos is a more challenging modeling problem, which is conveyed by the metrics (in particular, comparing 12-frame videos across 64 × 64, 128 × 128 and 256 × 256 resolutions). Nevertheless, DVD-GAN is able to generate plausible videos at all resolutions and with length spanning up to 4 seconds (48 frames). As can be seen in Appendix D.1, smaller videos display high quality textures, object composition and movement. At higher resolutions, generating coherent objects becomes more difficult (movement consists of a much larger number of pixels), but high-level details of the generated scenes are still extremely coherent, and textures (even complicated ones like a forest backdrop in Figure 1a) are generated well. It is further worth noting that the 48-frame models do not see more high resolution frames than the 12-frame model (due to the fixed choice of k = 8 described in Section 3.1), yet nevertheless learn to generate high resolution images. We further verify our results by testing the same model on UCF-101 , a smaller dataset of 13,320 videos of human actions across 101 classes that has previously been used for video synthesis and prediction (; ;). Our model produces samples with an IS of 27.38, significantly outperforming the state of the art (see Table 2 for quantitative comparison and Appendix B.1 for more details). Table 2 (Inception Score on UCF-101): 11.85 ± .07; MoCoGAN 12.42 ± .03; ProgressiveVGAN 14.56 ± .05; TGANv2 24.34 ± .35; DVD-GAN (ours) 27.38 ± 0.53. Future Video Prediction is the problem of generating a sequence of frames which directly follow from one (or a number) of initial conditioning frames. Both this and video synthesis require G to learn to produce realistic scenes and temporal dynamics, however video prediction further requires G to analyze the conditioning frames and discover elements in the scene which will evolve over time. In this section, we use the Fréchet Video Distance exactly as originally proposed: using the logits of an I3D network trained on Kinetics-400 as features. This allows for direct comparison to prior work. Our model, DVD-GAN-FP (Frame Prediction), is slightly modified to facilitate the changed problem, and details of these changes are given in Appendix A.5. For direct comparison with concurrent work on autoregressive video models we consider the generation of 11 frames of Kinetics-600 at 64 × 64 resolution conditioned on 5 frames, where the videos for training are not taken with any frame skipping. We show results for all these cases in Table 4. Our frame-conditional model DVD-GAN-FP outperforms the prior work on frame-conditional prediction for Kinetics. The final row labeled DVD-GAN corresponds to 16-frame class-conditional Video Synthesis samples, generated without frame conditioning and without frame skipping. The FVD of this video synthesis model is notably better.
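As a rough illustration of how FID/FVD-style scores are computed from such features, the sketch below evaluates the Fréchet distance between Gaussians fitted to real and generated feature matrices. The feature extractor (an I3D network here) and the layer used are the choices described in the text; the code itself is a generic illustration rather than the authors' implementation.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_gen):
    """Fréchet distance between Gaussians fitted to two feature matrices of shape [N, D]."""
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_gen, rowvar=False)
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    covmean = covmean.real  # discard tiny imaginary parts from numerical error
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```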
On the one hand, we hypothesize that the synthesis model has an easier generative task: it can choose to generate (relatively) simple samples for each class, rather than be forced to continue frames taken from videos which are class outliers, or contain more complicated details. On the other hand, a certain portion of the FID/FVD metric undoubtedly comes from the distribution of objects and backgrounds present in the dataset, and so it seems that the prediction model should have a handicap in the metric by being given the ground truth distribution of backgrounds and objects with which to continue videos. The synthesis model's improved performance on this task seems to indicate that the advantage of being able to select videos to generate is greater than the advantage of having a ground truth distribution of starting frames. This is unintuitive, as the frame conditional model has access to strictly more information about the data distribution it is trying to recover compared to the synthesis model (despite the fact that the two models are being trained by an identical objective). This experiment favors the synthesis model for FVD, but we highlight that other models or other metrics might produce the opposite ordering. We analyze several choices for k (the number of frames per sample in the input to D S) and φ (the downsampling function for D T). We expect setting φ to the identity or k = T to result in the best model, but we are interested in the maximally compressive k and φ that reduce discriminator input size (and the amount of computation), while still producing a high quality generator. For φ, we consider: 2 × 2 and 4 × 4 average pooling, the identity (no downsampling), as well as a φ which takes a random half-sized crop of the input video (as in). Results can be seen in Figure 6. For each ablation, we train three identical DVD-GANs with different random initializations on 12-frame clips of Kinetics-600 at 64 × 64 resolution for 100,000 steps. We report mean and standard deviation (via the error bars) across each group for the whole training period. For k, we consider 1, 2, 8 and 10 frames. We see diminishing effect as k increases, so settle on k = 8. We note the substantially reduced IS of 4 × 4 downsampling as opposed to 2 × 2, and further note that taking half-sized crops (which results in the same number of pixels input to D T as 2 × 2 pooling) is also notably worse. We approached the challenging problem of modeling natural video by introducing a GAN capable of capturing the complexity of a large video dataset. We showed that on UCF-101 and frame-conditional Kinetics-600 it quantitatively achieves the new state of the art, alongside qualitatively producing video synthesis samples with high complexity and diversity. We further wish to emphasize the benefit of training generative models on large and complex video datasets, such as Kinetics-600, and envisage that the strong baselines we established on this dataset with DVD-GAN will be used as a reference point by the generative modeling community moving forward. While much remains to be done before realistic videos can be consistently generated in an unconstrained setting, we believe DVD-GAN is a step in that direction. A EXPERIMENT METHODOLOGY For all datasets we randomly shuffle the training set for each model replica independently. Experiments on the BAIR Robot Pushing dataset are conducted in the native resolution of 64 × 64, whereas for UCF-101 we operate at a (downsampled) 128 × 128 resolution.
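The φ candidates compared in this ablation are simple to state. A minimal sketch of the four options, assuming video tensors shaped [T, H, W, C]; the function names are ours, not the paper's:

```python
import numpy as np

def avg_pool(video, s):
    """s x s spatial average pooling of a [T, H, W, C] video (H and W divisible by s)."""
    T, H, W, C = video.shape
    return video.reshape(T, H // s, s, W // s, s, C).mean(axis=(2, 4))

def phi_identity(video):
    return video                        # no downsampling (upper bound, most expensive)

def phi_avg_2x2(video):
    return avg_pool(video, 2)           # default choice in the text

def phi_avg_4x4(video):
    return avg_pool(video, 4)           # substantially reduced IS in the ablation

def phi_half_crop(video, rng=np.random):
    """Random half-sized crop: same pixel count fed to D_T as 2x2 pooling, but worse results."""
    T, H, W, C = video.shape
    top, left = rng.randint(0, H // 2 + 1), rng.randint(0, W // 2 + 1)
    return video[:, top:top + H // 2, left:left + W // 2, :]
```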
This is done by a bilinear resize such that the video's smallest dimension is mapped to 128 pixels while maintaining aspect ratio (144 for UCF-101). From this we take a random 128-pixel crop along the other dimension. We use the same procedure to construct datasets of different resolutions for Kinetics-600. All three datasets contain videos with more frames than we generate, so we take a random sequence of consecutive frames from the resized output. For UCF-101, we augmented the dataset by randomly performing left-right flips with probability 0.5. Our model adopts many architectural choices from including our nomenclature for describing network width, which is determined by the product of a channel multiplier ch with a constant for each layer in the network. The layer-wise constants for G are for 64 × 64 videos and for 128 × 128. The width of the i-th layer is given by the product of ch and the i-th constant and all layers prior to the residual network in G use the initial layer's multiplier and we refer to the product of that and ch as ch 0. ch in DVD-GAN is 128 for videos with 64 × 64 resolution and 96 otherwise. The corresponding ch lists for both D T and D S are for 64 × 64 resolution and for 128 × 128. The input to G consists of a Gaussian latent noise z ∼ N (0, I) and a learned linear embedding e(y) of the desired class y. Both inputs are 120-dimensional vectors. G starts by computing an affine transformation of [z; e(y)] to a [4, 4, ch 0]-shaped tensor (in Figure 3 this is represented as a 1 × 1 convolution). [z; e(y)] is used as the input to all class-conditional Batch Normalization layers throughout G (the gray line in Figure 7). This is then treated as the input (at each frame we would like to generate) to a Convolutional Gated Recurrent Unit whose update rule for input x t and previous output h t−1 is given by the following: In these equations σ and ρ are the elementwise sigmoid and ReLU functions respectively, the n operator represents a convolution with a kernel of size n × n, and the operator is an elementwise multiplication. Brackets are used to represent a feature concatenation. This RNN is unrolled once per frame. The output of this RNN is processed by two residual blocks (whose architecture is given by Figure 7). The time dimension is combined with the batch dimension here, so each frame proceeds through the blocks independently. The output of these blocks has width and height dimensions which are doubled (we skip upsampling in the first block). This is repeated a number of times, with the output of one RNN + residual group fed as the input to the next group, until the output tensors have the desired spatial dimensions. We do not reduce over the time dimension when calculating Batch Normalization statistics. This prevents the network from utilizing the Batch Normalization layers to pass information between timesteps. The spatial discriminator D S functions almost identically to BigGAN's discriminator, though an overview of the residual blocks is given in Figure 7 for completeness. A score is calculated for each of the uniformly sampled k frames (we default to k = 8) and the D S output is the sum over per-frame scores. The temporal discriminator D T has a similar architecture, but pre-processes the real or generated video with a 2 × 2 average-pooling downsampling function φ. Furthermore, the first two residual blocks of D T are 3-D, where every convolution is replaced with a 3-D convolution with a kernel size of 3 × 3 × 3. The rest of the architecture follows BigGAN . 
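The Convolutional GRU update equations referred to above did not survive extraction. One standard convolutional GRU consistent with the surrounding description (sigmoid gates σ, ReLU candidate ρ, n × n convolutions denoted *_n with n = 3 assumed here, [;] for channel concatenation, and ⊙ for elementwise multiplication) is sketched below; the paper's exact gating and kernel sizes may differ.

```latex
u_t = \sigma\!\left(W_u \ast_{3} [h_{t-1};\, x_t] + b_u\right), \qquad
r_t = \sigma\!\left(W_r \ast_{3} [h_{t-1};\, x_t] + b_r\right), \\
c_t = \rho\!\left(W_c \ast_{3} [r_t \odot h_{t-1};\, x_t] + b_c\right), \qquad
h_t = u_t \odot h_{t-1} + (1 - u_t) \odot c_t .
```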
Sampling from DVD-GAN is very efficient, as the core of the generator architecture is a feed-forward convolutional network: two 64 × 64 48-frame videos can be sampled in less than 150ms on a single TPU core. The dual discriminator D is updated twice for every update of G and we use Spectral Normalization for all weight layers (approximated by the first singular value) and orthogonal initialization of weights . Sampling is carried out using the exponential moving average of G's weights, which is accumulated with decay γ = 0.9999 starting after 20,000 training steps. The model is optimized using Adam with batch size 512 and a learning rate of 1 · 10 −4 and 5 · 10 −4 for G and D respectively. Class conditioning in D is projection-based whereas G relies on class-conditional Batch Normalization (; ;): equivalent to standard Batch Normalization without a learned scale and offset, followed by an elementwise affine transformation where each parameter is a function of the noise vector and class conditioning. The FID we use for Synthesis on Kinetics-600 is calculated exactly as Fréchet Video Distance except that we use a different feature network: an I3D trained on Kinetics-600 (as opposed to the network trained on Kinetics-400 in FVD) and features from the final hidden layer instead of the logits. This metric can be implemented as a small change from the publicly available FVD code by changing the name of the TF-Hub module to 'https://tfhub.dev/deepmind/i3d-kinetics-600/1' and loading the tensor named 'RGB/inception_i3d/Logits/AvgPool3D' from the resulting graph. In order to provide results on future video prediction problems we describe a simple modification to DVD-GAN to facilitate the added conditioning. A diagram of the extended model is in Figure 8. Each row is a separate video; the leftmost column is a (true) conditioning frame. Given C conditioning frames, our modified DVD-GAN-FP passes each frame separately through a deep residual network identical to D S. The (near) symmetric design of G and D S's residual blocks means that each output from a D-style residual block has a corresponding intermediate tensor in G of the same spatial resolution. After each block the resulting features for each conditioning frame are stacked in the channel dimension and passed through a 3 × 3 convolution and ReLU activation. The resulting tensor is used as the initial state for the Convolutional GRU in the corresponding block in G. Note that the frame conditioning stack reduces spatial resolution while G increases resolution. Therefore the smallest features of the conditioning frames (which have been through the most layers) are input earliest in G and the larger features (which have been through less processing) are input to G towards the end. D T operates on the concatenation of the conditioning frames and the output of G, meaning that it does not receive any extra information detailing that the first C frames are special. However to reduce wasted computation we do not sample the first C frames for D S on real or generated data. This technically means that D S will never see the first few frames from real videos at full resolution, but this was not an issue in our experiments. Finally, our video prediction variant does not condition on any class information, allowing us to directly compare with prior art. This is achieved by setting the class id of all samples to 0.
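The exponential moving average used for sampling is straightforward to implement. A minimal, framework-agnostic sketch, assuming parameters are stored as a dict of numpy arrays and using the 20,000-step warm-up from the text:

```python
class WeightEMA:
    """Exponential moving average of generator parameters, used only for sampling."""

    def __init__(self, params, decay=0.9999, start_step=20000):
        self.decay, self.start_step = decay, start_step
        self.shadow = {k: v.copy() for k, v in params.items()}

    def update(self, params, step):
        if step < self.start_step:
            # Before the warm-up threshold: simply track the raw weights.
            self.shadow = {k: v.copy() for k, v in params.items()}
            return
        d = self.decay
        for k, v in params.items():
            self.shadow[k] = d * self.shadow[k] + (1.0 - d) * v
```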
This evaluation is performed by re-scaling the video to 128 × 128, normalizing the input features based on mean statistics of the ground truth dataset, then taking a 112 × 112 center crop and applying C3D. Our model produces samples with an IS of 27.38, significantly outperforming the state of the art (see Table 2). The DVD-GAN architecture on UCF-101 is identical to the model used for Kinetics, and is trained on 16-frame 128 × 128 clips from UCF-101. The lack of class information does hurt the performance of DVD-GAN, and training on UCF-101 with class labels leads to an improved model with an Inception Score of 32.97. This is directly comparable to prior work which achieved an IS of 15.83 and is close to the IS reported for the ground truth data (34.49). However we note that many more recent video generation papers do not test in this regime. It is worth mentioning that our improved score is, at least partially, due to memorization of the training data. In Figure 10 we show interpolation samples from our best UCF-101 model. Like interpolations in Appendix D.2, we sample 2 latents (left and rightmost columns) and show samples from the linear interpolation in latent space along each row. Here we show 4 such interpolations (the first frame from each video). Unlike Kinetics-600 interpolations, which smoothly transition from one sample to the other, we see abrupt jumps in the latent space between highly distinct samples, and little intra-video diversity between samples in each group. It can be further seen that some generated samples highly correlate with samples from the training set. We show this both as a failure of the Inception Score metric, the commonly reported value for class-conditional video synthesis on UCF-101, but also as a strong signal that UCF-101 is not a complex or diverse enough dataset to facilitate interesting video generation. Each class is relatively small, and reuse of clips from shared underlying videos means that the intra-class diversity can be restricted to just a handful of videos per class. This suggests the need for larger, more diverse and challenging datasets for generative video modelling, and we believe that Kinetics-600 provides a better benchmark for this task. Here we detail a number of modifications or miscellaneous experiments which did not produce a conclusive result. • We experimented with several variations of normalization which do not require calculating statistics over a batch of data. Group Normalization performed best, almost on a par with (but worse than) Batch Normalization. We further tried Layer Normalization , Instance Normalization , and no normalization, but found that these significantly underperformed Batch Normalization. • We found that removing the final Batch Normalization in G, which occurs after the ResNet and before the final convolution, caused a catastrophic failure in learning. Interestingly, just removing the Batch Normalization layers within G's residual blocks still led to good (though slightly worse) generative models. In particular, variants without Batch Normalization in the residual blocks often achieve significantly higher IS (up to 110.05 for 64 × 64 12 frame samples - twice normal). But these models had substantially worse FID scores (1.22 for the aforementioned model) - and produced qualitatively worse video samples. • Early variants of DVD-GAN contained Batch Normalization which normalized over all frames of all batch elements. This gave G an extra channel to convey information across time.
It took advantage of this, with the result being a model which required batch statistics in order to produce good samples. We found that the version which normalizes over timesteps independently worked just as well and without the dependence on statistics. • Models based on the residual blocks of BigGAN-deep trained faster (in wall clock time) but slower with regard to metrics, and struggled to reach the accuracy of models based on BigGAN's residual blocks. It is difficult to accurately convey complicated generated video through still frames. Where provided, we recommend readers view the generated videos themselves via the provided links. We refer to videos within these batches by row/column number where the video in the 0th row and column is in the top left corner. We expect G to produce samples of higher quality from latents near the mean of the distribution (zero). This is the idea behind the Truncation Trick . Like BigGAN, we find that DVD-GAN is amenable to truncation. We also experiment with interpolations in the latent space and in the class embedding. In both cases, interpolations are evidence that G has learned a relatively smooth mapping from the latent space to real videos: this would be impossible for a network that has only memorized the training data, or which is only capable of generating a few exemplars per class. Note that while all latent vectors along an interpolation are valid (and therefore G should produce a reasonable sample), at no point during training is G asked to generate a sample halfway between two classes. Nevertheless G is able to interpolate between even very distinct classes. Figure 17: An example intra-class interpolation. Each column is a separate video (the vertical axis is the time dimension). The left and rightmost columns are randomly sampled latent vectors and are generated under a shared class. Columns in between represent videos generated under the same class across the linear interpolation between the two random samples. Note the smooth transition between videos at all six timesteps displayed here. Figure 18: An example of class interpolation. As before, each column is a sequence of timesteps of a single video. Here, we sample a single latent vector, and the left and rightmost columns represent generating a video of that latent under two different classes. Columns in between represent videos of that same latent generated across an interpolation of the class embedding. Even though at no point has DVD-GAN been trained on data under an interpolated class, it nevertheless produces reasonable samples.
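The interpolations shown in Figures 17 and 18 amount to linear blends in latent space or in class-embedding space. A minimal sketch, where `generate` is a placeholder for the trained generator rather than an API from the paper:

```python
import numpy as np

def latent_interpolation(generate, z0, z1, class_emb, steps=8):
    """Generate videos along a straight line between two latent vectors z0 and z1."""
    alphas = np.linspace(0.0, 1.0, steps)
    return [generate((1 - a) * z0 + a * z1, class_emb) for a in alphas]

def class_interpolation(generate, z, emb_a, emb_b, steps=8):
    """Hold the latent fixed and interpolate between two class embeddings."""
    alphas = np.linspace(0.0, 1.0, steps)
    return [generate(z, (1 - a) * emb_a + a * emb_b) for a in alphas]
```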
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Byx91R4twB
We propose DVD-GAN, a large video generative model that is state of the art on several tasks and produces highly complex videos when trained on large real world datasets.
Understanding procedural language requires anticipating the causal effects of actions, even when they are not explicitly stated. In this work, we introduce Neural Process Networks to understand procedural text through (neural) simulation of action dynamics. Our model complements existing memory architectures with dynamic entity tracking by explicitly modeling actions as state transformers. The model updates the states of the entities by executing learned action operators. Empirical results demonstrate that our proposed model can reason about the unstated causal effects of actions, allowing it to provide more accurate contextual information for understanding and generating procedural text, all while offering more interpretable internal representations than existing alternatives. Understanding procedural text such as instructions or stories requires anticipating the implicit causal effects of actions on entities. For example, given instructions such as "add blueberries to the muffin mix, then bake for one half hour," an intelligent agent must be able to anticipate a number of entailed facts (e.g., the blueberries are now in the oven; their "temperature" will increase). While this common sense reasoning is trivial for humans, most natural language understanding algorithms do not have the capacity to reason about causal effects not mentioned directly in the surface strings BID12 BID7 BID14. The process is a narrative of entity state changes induced by actions. In each sentence, these state changes are induced by simulated actions and must be remembered. In this paper, we introduce Neural Process Networks, a procedural language understanding system that tracks common sense attributes through neural simulation of action dynamics. Our network models interpretation of natural language instructions as a process of actions and their cumulative effects on entities. More concretely, reading one sentence at a time, our model attentively selects what actions to execute on which entities, and remembers the state changes induced with a recurrent memory structure. In FIG0, for example, our model indexes the "tomato" embedding, selects the "wash" and "cut" functions and performs a computation that changes the "tomato" embedding so that it can reason about attributes such as its "SHAPE" and "CLEANLINESS". Our model contributes to a recent line of research that aims to model aspects of world state changes, such as language models and machine readers with explicit entity representations BID4 BID6, as well as other more general purpose memory network variants BID30 BID26 BID5 BID23. This world-centric modeling of procedural language (i.e., understanding by simulation) abstracts away from the surface strings, complementing text-centric modeling of language, which focuses on syntactic and semantic labeling of surface words (i.e., understanding by labeling). Unlike previous approaches, however, our model also learns explicit action representations as functional operators (See FIG0). While representations of action semantics could be acquired through an embodied agent that can see and interact with the world BID22, we propose to learn these representations from text. In particular, we require the model to be able to explain the causal effects of actions by predicting natural language attributes about entities such as "LOCATION" and "TEMPERATURE". The model adjusts its representations of actions based on errors it makes in predicting the resultant state changes to attributes.
This textual simulation allows us to model aspects of action causality that are not readily available in existing simulation environments. Indeed, most virtual environments offer limited aspects of the world - with a primary focus on spatial relations BID22 BID1 BID29 . They leave out various other dimensions of the world states that are implied by diverse everyday actions such as "dissolve" (change of "COMPOSITION") and "wash" (change of "CLEANLINESS"). Empirical results demonstrate that parametrizing explicit action embeddings provides an inductive bias that allows the neural process network to learn more informative context representations for understanding and generating natural language procedural text. In addition, our model offers more interpretable internal representations and can reason about the unstated causal effects of actions explained through natural language descriptors. Finally, we include a new dataset with fine-grained annotations on state changes, to be shared publicly, to encourage future research in this direction. The neural process network is an interpreter that reads in natural language sentences, one at a time, and simulates the process of actions being applied to relevant entities through learned representations of actions and entities. The main component of the neural process network is the simulation module (§2.5), a recurrent unit whose internals simulate the effects of actions being applied to entities. A set of V actions is known a priori and an embedding is initialized for each one, F = {f_1, ..., f_V}. Similarly, a set of I entities is known and an embedding is initialized for each one: E = {e_1, ..., e_I}. Each e_i can be considered to encode information about state attributes of that entity, which can be extracted by a set of state predictors (§2.6). As the model reads text, it "applies" action embeddings to the entity vectors, thereby changing the state information encoded about the entities. For any document d, an initial list of entities I_d is known and E_d = {e_i | i ∈ I_d} ⊂ E entity state embeddings are initialized. As the neural process network reads a sentence from the document, it selects a subset of both F (§2.3) and E_d (§2.4) based on the actions performed and entities affected in the sentence. The entity state embeddings are changed by the action and the new embeddings are used to predict end states for a set of state changes (§2.6). The prediction error for end states is backpropagated to the action embeddings, learning action representations that model the simulation of desired causal effects on entities. This process is broken down into five modules below. Unless explicitly defined, all W and b variables are parametrized linear projections and biases. We use the notation {e_i}_t when referring to the values of the entity embeddings before processing sentence s_t. Given a sentence s_t, a Gated Recurrent Unit encodes each word and outputs its last hidden vector as a sentence encoding h_t. Given h_t from the sentence encoder, the action selector (bottom left in Fig. 2) chooses which actions to execute; for example, given "wash and cut beets", both f_wash and f_cut must be selected. To account for multiple actions, we make a soft selection over F, yielding a weighted sum of the selected action embeddings f̄_t: DISPLAYFORM0 where MLP is a parametrized feed-forward network with a sigmoid activation and w_p ∈ R^V is the attention distribution over V possible actions (§3.1). We compose the action embedding by taking the weighted average of the selected actions.
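A minimal sketch of the action selector described above: a feed-forward network with a sigmoid output over the V actions, followed by a weighted average of the action embeddings. The layer sizes, the ReLU hidden activation, and the normalization of the weights are assumptions rather than details quoted from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def select_actions(h_t, W1, b1, W2, b2, F):
    """h_t: sentence encoding [d]; F: action embedding matrix [V, d_a]."""
    hidden = np.maximum(0.0, W1 @ h_t + b1)   # feed-forward layer (ReLU assumed)
    w_p = sigmoid(W2 @ hidden + b2)           # soft, possibly multi-hot selection over V actions
    f_bar = (w_p / (w_p.sum() + 1e-8)) @ F    # weighted average of the selected action embeddings
    return w_p, f_bar
```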
Sentence Attention Given h t from the sentence encoder, the entity selector chooses relevant entities using a soft attention mechanism: DISPLAYFORM0 where W 2 is a bilinear mapping, e i0 is a unique key for each entity (§2.5), and d i is the attention weight for entity embedding e i. For example, in "wash and cut beets and carrots", the model should select e beet and e carrot. Recurrent Attention While sentence attention would suffice if entities were always explicitly mentioned, natural language often elides arguments or uses referent pronouns. As such, the module must be able to consider entities mentioned in previous sentences. Usingh t, the model computes a soft choice over whether to choose affected entities from this step's attention d i or the previous step's attention distribution. DISPLAYFORM1 where c ∈ R 3 is the choice distribution, a it−1 is the previous sentence's attention weight for each entity, a it is the final attention for each entity, and 0 is a vector of zeroes (providing the option to not change any entity). Prior entity attentions can propagate forward for multiple steps. Entity Memory A unique state embedding e i is initialized for every entity i in the document. A unique key to index each embedding e i0 is set as the initial value of the embedding BID4 BID17. After the model reads s t, it modifies {e i} t to reflect changes influenced by actions. At every time step, the entity memory receives the attention weights from the entity selector, normalizes them and computes a weighted average of the relevant entity state embeddings: DISPLAYFORM0 Applicator Given the action summary embeddingf t and the entity summary embeddingē t, the applicator (middle right in Fig. 2) applies the selected actions to the selected entities, and outputs the new proposal entity embedding k t. DISPLAYFORM1 where W 4 is a third order tensor projection. The vector k t is the new representation of the entityē t after the applicator simulates the action being applied to it. Entity Updater The entity updater interpolates the new proposal entity embedding k t and the set of current entity embeddings {e i} t: DISPLAYFORM2 yielding an updated set of entity embeddings {e i} t+1. Each embedding is updated proportional to its entity's unnormalized attention a i, allowing the model to completely overwrite the state embedding for any entity. For example, in the sentence "mix the flour and water," the embeddings for e f lour and e water must both be overwritten by k t because they no longer exist outside of this new composition. Given the new proposal entity embedding k t, the state predictor (bottom right in Fig. 2) predicts changes to the ing entity embedding k t along the following six dimensions: location, cookedness, temperature, composition, shape, and cleanliness. Discrete multi-class classifiers, one for each dimension, take in k t and predict a unique end state for their corresponding state change type: DISPLAYFORM0 For location changes, which require contextual information to predict the end state, k t is concatenated with the original sentence representation h t to predict the final state. In this work we focus on physical action verbs in cooking recipes. We manually collect a set of 384 actions such as cut, bake, boil, arrange, and place, organizing their causal effects along the following predefined dimensions: LOCATION, COOKEDNESS, TEMPERATURE, SHAPE, CLEANLI-NESS and COMPOSITION. 
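A rough sketch of the applicator and entity updater described above. The third-order tensor interaction is written as an einsum contraction, and the ReLU, bias, and the exact form of the attention-weighted interpolation are assumptions, since the original equations did not survive extraction.

```python
import numpy as np

def apply_action(f_bar, e_bar, W4, b):
    """Applicator: bilinear (third-order tensor) interaction between the action summary
    f_bar and the entity summary e_bar; W4 has shape [d_out, d_f, d_e]."""
    return np.maximum(0.0, np.einsum('ofe,f,e->o', W4, f_bar, e_bar) + b)

def update_entities(entities, k_t, attention):
    """Entity updater: overwrite each entity embedding in proportion to its attention weight,
    so an entity with attention ~1 is fully replaced by the proposal embedding k_t."""
    for i, a_i in enumerate(attention):
        entities[i] = (1.0 - a_i) * entities[i] + a_i * k_t
    return entities
```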
The textual simulation operated by the model induces state changes along these dimensions by applying actions functions from the above set of 384. For example, cut entails a change in SHAPE, while bake entails a change in TEMPERATURE, COOKEDNESS, and even LO-CATION. We annotate the state changes each action induces, as well as the end state of the action, using Amazon Mechanical Turk. The set of possible end states for a state change can range from 2 for binary state changes to more than 200 (See Appendix C for details). For learning and evaluation, we use a subset of the Now You're Cooking dataset BID9. We chose 65816 recipes for training, 175 recipes for development, and 700 recipes for testing. For the development and test sets, crowdsourced workers densely annotate actions, entities and state changes that occur in each sentence so that we can tune hyperparameters and evaluate on gold evaluation sets. Annotation details are provided in Appendix C.3. The neural process network is trained by jointly optimizing multiple losses for the action selector, entity selector, and state change predictors. Importantly, our training scheme uses weak supervision because dense annotations are prohibitively expensive to acquire at a very large scale. Thus, we heuristically extract verb mentions from each recipe step and assign a state change label based on the state changes induced by that action (§3.1). Entities are extracted similarly based on string matching between the instructions and the ingredient list. We use the following losses for training: Action Selection Loss Using noisy supervision, the action selector is trained to minimize the cross-entropy loss for each possible action, allowing multiple actions to be chosen at each step if multiple actions are mentioned in a sentence. The MLP in the action selector (Eq. 1) is pretrained. Entity Selection Loss Similarly, to train the attentive entity selector, we minimize the binary cross-entropy loss of predicting whether each entity is affected in the sentence. State Change Loss For each state change predictor, we minimize the negative loglikelihood of predicting the correct end state for each state change. Coverage Loss An underlying assumption in many narratives is that all entities that are mentioned should be important to the narrative. We add a loss term that penalizes narratives whose combined attention weights for each entity does not sum to more than 1. DISPLAYFORM0 where a it is the attention weight for a particular entity at sentence t and I d is the number of entities in a document. S t=1 a it is upper bounded by 1. This is similar to the coverage penalty used in neural machine translation BID28. We evaluate our model on a set of intrinsic tasks centered around tracking entities and state changes in recipes to show that the model can simulate preliminary dynamics of the recipe task. Additionally, we provide a qualitative analysis of the internal components of our model. Finally, we evaluate the quality of the states encoded by our model on the extrinsic task of generating future steps in a recipe. In the tracking task, we evaluate the model's ability to identify which entities are selected and what changes have been made to them in every step. We break the tracking task into two separate evalua- Metrics In the entity selection test, we report the F1 score of choosing the correct entities in any step. A selected entity is defined as one whose attention weight a i is greater than 50% (§2.4). 
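One way to write a coverage penalty matching the description above (our reconstruction, not necessarily the exact equation elided in the text) is:

```latex
\mathcal{L}_{\text{cov}} \;=\; \frac{1}{I_d} \sum_{i=1}^{I_d}
\Big( 1 - \min\big(1,\ \textstyle\sum_{t=1}^{S} a_{it}\big) \Big),
```

so that each entity contributes zero loss once its attention weights summed over the S sentences reach 1, and the penalty only pushes attention onto entities that were never selected.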
Because entities may be harder to predict when they have been combined with other entities (e.g., the mixture may have a new name), we also report the recall for selecting combined (CR) and uncombined (UR) entities. In the end state prediction test, we report how often the model correctly predicts the state change performed in a recipe step and the resultant end state. This score is then scaled by the accuracy of predicting which entities were changed in that same step. We report the average F1 and accuracy across the six state change types. Baselines We compare our models against two baselines. First, we built a GRU model that is trained to predict entities and state changes independently. This can be viewed as a bare minimum network with no action representations or recurrent entity memory. The second baseline is a Recurrent Entity Network BID4 with changes to fit our task. First, the model can tie memory cells to a subset of the full list of entities so that it only considers entities that are present in a particular recipe. Second, the entity distribution for writing to the memory cells is re-used when we query the memory cells. The normalized weighted average of the entity cells is used as the input to the state predictors. The unnormalized attention when writing to each cell is used to predict selected entities. Both baselines are trained with entity selection and state change losses (§3.3). Ablations We report on six ablations. First, we remove the recurrent attention (Eq. 3). The model only predicts entities using the current encoder hidden state. In the second ablation, the model is trained with no coverage penalty (Eq. 9). The third ablation prunes the connection from the action selector w_p to the entity selector (Eq. 2). We also explore not pretraining the action selector. Finally, we look at two ablations where we initialize the action embeddings with vectors from a skipgram model. In the first, the model operates normally, and in the second, we do not allow gradients to backpropagate to the action embeddings, updating only the mapping tensor W_4 instead (Eq. 6). The generation task tests whether our system can produce the next step in a recipe based on the previous steps that have been performed. The model is provided all of the previous steps as context. We report the combined BLEU score and ROUGE score of the generated sequence relative to the reference sequence. Each candidate sequence has one reference sentence. Both metrics are computed at the corpus-level. Also reported are "VF1", the F1 score for the overlap of the actions performed in the reference sequence and the verbs mentioned in the generated sequence, and "SF1", the F1 score for the overlap of end states annotated in the reference sequence and predicted by the generated sequences. End states for the generated sequences are extracted using the lexicon from Section 3.1 based on the actions performed in the sentence. Setup To apply our model to the task of recipe step generation, we input the context sentences through the neural process network and record the entity state vectors once the entire context has st Let cool. selected oats, sugar, flour, corn syrup, milk, vanilla extract, salt correct oats, sugar, flour, corn syrup, milk, vanilla extract, salt Good st−1 In a large saucepan over low heat, melt marshmallows.st Add sprinkles, cereal, and raisins, stir until well coated.
selected marshmallows, cereal, raisins correct marshmallows, cereal, raisins, sprinkles Bad st−3 Ladle the barbecue sauce around the crust and spread. st−2 Add mozzarella, yellow cheddar, and monterey jack cheese. st−1 Next, add onion mixture and sliced chicken breast.st Top pizza with jalapeno peppers. selected jalapenos correct crust, sauce, mozzarella, cheddar, monterey jack, white onion, chicken, jalapenos Bad st−2 Combine 1 cup flour, salt, and 1 tbsp sugar. st−1 Cut in butter until mixture is crumbly, then sprinkle with vinegar.st Gather dough into a ball and press into bottom of 9 inch springform pan. selected butter, vinegar correct flour, salt, sugar, butter, vinegar TAB5: Examples of the model selecting entities for sentence s t. The previous sentences are provided as context in cases where they are relevant. been read (§2.5). These vectors can be viewed as a snapshot of the current state of the entities once the preceding context has been simulated inside the neural process network. We encode these vectors using a bidirectional GRU and take the final time step hidden state e I. A different GRU encodes the context words in the same way (yielding h T) and the first hidden state input to the decoder is computed using the projection function: DISPLAYFORM0 where • is the Hadamard product between the two encoder outputs. All models are trained by minimizing the negative loglikelihood of predicting the next word for the full sequence. Implementation details can be found in Appendix A. Baselines For the generation task, we use three baselines: a seq2seq model with no attention, an attentive seq2seq model BID0, and a similar variant as our NPN generator, except where the entity states have been computed by the Recurrent Entity Network (EntNet) baseline (§4.1). Implementation details for baselines can be found in Appendix B. Entity Selection As shown in Table 8, our full model outperforms all baselines at selecting entities, with an F1 score of 55.39%. The ablation study shows that the recurrent attention, coverage loss, action connections and action selector pretraining improve performance. Our success at predicting entities extends to both uncomposed entities, which are still in their raw forms (e.g., melt the butter → butter), and composed entities, in which all of the entities that make up a composition must be selected. For example, in a Cooking lasagna recipe, if the final step involves baking the prepared lasagna, the model must select all the entities that make up the lasagna (e.g., lasagna sheets, beef, tomato sauce). In as compositional entities (Ex. 1, 3), and elided arguments over long time windows (Ex. 2). We also provide examples where the model fails to select the correct entities because it does not identify the mapping between a reference construct such as "pizza" (Ex. 4) or "dough" (Ex. 5) and the set of entities that composes it, showcasing the difficulty of selecting the full set for a composed entity. State Change Tracking In Table 8, we show that our full model outperforms competitive baselines such as Recurrent Entity Networks BID4 and jointly trained GRUs. While the ablation without the coverage loss shows higher accuracy, we attribute this to the fact that it predicts a smaller number of total state changes. Interestingly, initializing action embeddings with skipgram vectors and locking their values shows relatively high performance, indicating the potential gains in using powerful pretrained representations to represent actions. 
Action Embeddings In our model, each action is assigned its own embedding, but many actions induce similar changes in the physical world (e.g.,"cut" and "slice"). After training, we compute the pairwise cosine similarity between each pair of action embeddings. In TAB6, we see that actions that perform similar functions are neighbors in embedding space, indicating the model has captured certain semantic properties of these actions. Learning action representations through the state changes they induce has allowed the model to cluster actions by their transformation functions. Entity Compositions When individual entities are combined into new constructs, our model averages their state embeddings (Eq. 5), applies an action embedding to them (Eq. 6), and writes them to memory (Eq. 7). The state embeddings of entities that are combined should be overwritten by the same new embedding. In Figure 3, we present the percentage increase in cosine similarity for state embeddings of entities that are combined in a sentence (blue) as opposed to the percentage increase for those that are not (red bars). While the soft attention mechanism for entity selection allows similarities to leak between entity embeddings, our system is generally able to model the compositionality patterns that from entities being combined into new constructs. Step Generation Our in TAB7 indicate that sequences generated using the neural process network entity states as additional input yield higher scores than competitive baselines. The entity states allow the model to predict next steps conditioned on a representation of the world being simulated by the neural process network. Additionally, the higher VF1 and SF1 scores indicate that the model is indeed using the extra information to better predict the actions that should follow the context provided. Example generations for each baselines from the dev set are provided in Table 6, Context Preheat oven to 425 degrees. Reference Melt butter in saucepan and mix in bourbon, thyme, pepper, and salt. NPN Melt butter in skillet. Seq2seq Lightly grease 4 x 8 baking pan with sunflower oil. Attentive Seq2seq Combine all ingredients and mix well. EntNet In a large bowl, combine flour, baking powder, baking soda, salt, and pepper. Context Pour egg mixture over caramelized sugar in cake pan. Place cake pan in large shallow baking dish. Bake for 55 minutes or until knife inserted into flan comes out clean. Reference Cover and chill at least 8 hours. NPN Refrigerate until ready to use. Seq2seq Serve at room temperature. Attentive Seq2seq Store in an airtight container. EntNet Store in an airtight container. Context Cut squash into large pieces and steam. Remove cooked squash from shells; Reference Measure 4 cups pulp and reserve remainder for another dish. NPN Drain. Seq2seq Mash pulp with a fork. Attentive Seq2seq Set aside. EntNet Set aside. Table 6: Examples of the model generating sentences compared to baselines. The context and reference are provided first, followed by our model's generation and then the baseline generationsshowing that the NPN generator can use information about ingredient states to reason about the most likely next step. The first and second examples are interesting as it shows that the NPN-aware model has learned to condition on entity state -knowing that raw butter will likely be melted or that a cooked flan must be refrigerated. 
The third example is also interesting because the model learns that cooked vegetables such as squash will sometimes be drained, even if it is not relevant to this recipe because the squash is steamed. The seq2seq and EntNet baselines, meanwhile, output reasonable sentences given the immediate context, but do not exhibit understanding of global patterns. Recent studies in machine comprehension have used a neural memory component to store a running representation of processed text BID30 BID26 BID5 BID23. While these approaches map text to memory vectors using standard neural encoder approaches, our model, in contrast, directly interprets text in terms of the effects actions induce in entities, providing an inductive bias for learning how to represent stored memories. More recent work in machine comprehension also sought to couple the memory representation with tracking entity states BID4. Our work seeks to provide a relatively more structured representation of domain-specific action knowledge to provide an inductive bias to the reasoning process. Neural Programmers BID20 have also used functions to simulate reasoning, by building a model to select rows in a database and applying operation on those selected rows. While their work explicitly defined the effect of a number of operations for those rows, we provide a framework for learning representations for a more expansive set of actions, allowing the model to learn representations for how actions change the state space. Works on instructional language studied the task of building discrete graph representations of recipes using probabilistic models BID8 BID19 BID18. We propose a complementary new model by integrating action and entity relations into the neural network architecture and also address the additional challenge of tracking the state changes of the entities. Additional work in tracking states with visual or multimodal context has focused on 1) building graph representations for how entities change in goal-oriented domains BID3 BID24 or 2) tracking visual state changes based on decisions taken by agents in environment simulators such as videos or games BID1 BID29 BID22. Our work, in contrast, models state changes in embedding space using only text-based signals to map real-world actions to algebraic transformations. We introduced the Neural Process Network for modeling a process of actions and their causal effects on entities by learning action transformations that change entity state representations. The model maintains a recurrent memory structure to track entity states and is trained to predict the state changes that entities undergo. Empirical demonstrate that our model can learn the causal effects of action semantics in the cooking domain and track the dynamic state changes of entities, showing advantages over competitive baselines. A TRAINING DETAILS OF OUR FULL MODEL AND ABLATIONS The hidden size of the instruction encoder is 100, the embedding sizes of action functions and entities are 30. We use dropout with a rate of 0.3 before any non-recurrent fully connected layers BID25. We use the Adam optimizer BID11 ) with a learning rate of.001 and decay by a factor of 0.1 if we see no improvement on validation loss over three epochs. We stop training early if the development loss does not decrease for five epochs. The batch size is 64. We use two instruction encoders, one for the entity selector, and one for the action selector. 
Word embeddings and entity embeddings are initialized with skipgram embeddings BID15;b) using a word2vec model trained on the training set. We use a vocabulary size of 7358 for words, and 2996 for entities. Gradients with respect to the coverage loss (Eq. 9) are only backpropagated in steps where no entity is annotated as being selected. To account for the false negatives in the training data due to the heuristic generation of the labels, gradients with respect to the entity selection loss are zeroed when no entity label is present. The hidden size of the context encoder is 200. The hidden size of the state vector encoder is 200. State vectors have dimensionality 30 (the same as in the neural process network). Dropout of 0.3 is used during training in the decoder. The context and state representations are projected jointly using an element-wise product followed by a linear projection BID10. Both encoders and the decoder are single layer. The learning rate is 0.0003 initially and is halved every 5 epochs. The model is trained with the Adam optimizer. Joint Gated Recurrent Unit The hidden state of the GRU is 100. We use a dropout with a rate of 0.3 before any non-recurrent fully connected layers. We use the Adam optimizer with a learning rate of.001 and decay by a factor of 0.1 if we see no improvement on validation loss over a single epoch. We stop training early if the development loss does not decrease for five epochs. The batch size is 64. We use encoders, one for the entity selector, and one for the state change predictors. Word embeddings are initialized with skipgram embeddings using a word2vec model trained on the training set. We use a vocabulary size of 7358 for words. Recurrent Entity Networks Memory cells are tied to the entities in the document. For a recipe with 12 ingredients, 12 entity cells are initialized. All hyperparameters are the same as the in the bAbI task from BID4. The learning rate start at 0.01 and is halved every 25 epochs. Entity cells and word embeddings are 100 dimensional. The encoder is a multiplicative mask initialized the same as in BID4. Intermediate supervision from the weak labels is provided to help predict entities. A separate encoder is used for computing the attention over memory cells and the content to write to the memory. Dropout of 0.3 is used in the encoders. The batch size is 64. We use a vocabulary size of 7358 for words, and 2996 for entities. Seq2seq The encoder and decoder are both single-layer GRUs with hidden size 200. We use dropout with probability 0.3 in the decoder. We train with the Adam optimizer starting with a learning rate 0.0003 that is halved every 5 epochs. The encoder is bidirectional. The model is trained to minimize the negative loglikelihood of predicting the next word. The encoder is the same as in the seq2seq baseline. A multiplicative attention between the decoder hidden state and the context vectors is used to compute the attention over the context at every decoder time step. The model is trained with the same learning rate, learning schedule and loss function as the seq2seq baseline. The model is trained in the same way as the NPN generator model in Appendix A.2 except that the state representations used as input are produced from by EntNet baseline described in Section 4.1 and Appendix B.1. We provide workers with a verb, its definition, an illustrative image of the action, and a set of sentences where the verb is mentioned. 
Workers are provided a checklist of the six state change types and instructed to identify which of them the verb causes. They are free to identify multiple changes. Seven workers annotate each verb and we assign a state change based on majority vote. Of the set of 384 verbs extracted, only 342 have a state change type identified with them. Of those, 74 entail multiple state change types.

We give workers a verb, a state change type, and an example with the verb, and ask them to provide an end state for the ingredient the verb is applied to in the example. We then use the answers to manually aggregate a set of end states for each state change type. These end states are used as labels when the model predicts state changes. For example, a LOCATION change might lead to an end state of "pan," "pot," or "oven." End states for each state change type are provided in

Annotators are instructed to note any entities that undergo one of the six state changes in each step, as well as to identify new combinations of ingredients that are created. For example, the sentence "Cut the tomatoes and add to the onions" would involve a SHAPE change for the tomatoes and a combination created from the "tomatoes" and "onions." In a separate task, three workers are asked to identify the actions performed in every sentence of the development and test set recipes. If an action receives a majority vote that it is performed, it is included in the annotations.

Table 8: Results for entity selection and state change selection on the development set when randomly dropping a percentage of the training labels
We propose a new recurrent memory architecture that can track common sense state changes of entities by simulating the causal effects of actions.
There has been a recent trend in training neural networks to replace data structures that have been crafted by hand, with an aim for faster execution, better accuracy, or greater compression. In this setting, a neural data structure is instantiated by training a network over many epochs of its inputs until convergence. In many applications this expensive initialization is not practical, for example streaming algorithms --- where inputs are ephemeral and can only be inspected a small number of times. In this paper we explore the learning of approximate set membership over a stream of data in one-shot via meta-learning. We propose a novel memory architecture, the Neural Bloom Filter, which we show to be more compressive than Bloom Filters and several existing memory-augmented neural networks in scenarios of skewed data or structured sets. One of the simplest questions one can ask of a set of data is whether or not a given query is contained within it. Is q, our query, a member of S, our chosen set of observations? This set membership query arises across many computing domains; from databases, network routing, and firewalls. One could query set membership by storing S in its entirety and comparing q against each element. However, more space-efficient solutions exist. The original and most widely implemented approximate set membership data-structure is the Bloom Filter BID2. It works by storing sparse distributed codes, produced from randomized hash functions, within a binary vector. The Bloom-filter trades off space for an allowed false positive rate, which arises due to hash collisions. However its error is one-sided; if an element q is contained in S then it will always be recognized. It never emits false negatives. One can find Bloom Filters embedded within a wide range of production systems; from network security BID16, to block malicious IP addresses; databases, such as Google's Bigtable BID7, to avoid unnecessary disk lookups; cryptocurrency BID19, to allow clients to filter irrelevant transactions; search, such as Facebook's typeahead search BID0, to filter pages which do not contain query prefixes; and program verification BID13, to avoid recomputation over previously observed states. While the main appeal of Bloom Filters is favourable compression, another important quality is the support for dynamic updates. New elements can be inserted in O time. This is not the case for all approximate set membership data structures. For example, perfect hashing saves ≈ 40% space over Bloom Filters but requires a pre-processing stage that is polynomial-time in the number of elements to store BID12. Whilst the static set membership problem is interesting, it limits the applicability of the algorithm. For example, in a database application that is serving a high throughput of write operations, it may be intractable to regenerate the full data-structure upon each batch of writes. We thus focus on the data stream computation model BID27, where input observations are assumed to be ephemeral and can only be inspected a constant number of timesusually once. This captures many real-world applications: network traffic analysis, database query serving, and reinforcement learning in complex domains. Devising an approximate set membership data structure that is not only more compressive than Bloom Filters, but can be applied to either dynamic or static sets, could have a significant performance impact on modern computing applications. 
In this paper we investigate this problem using memory-augmented neural networks and meta-learning. We build upon the recently growing literature on using neural networks to replace algorithms that are configured by heuristics, or do not take advantage of the data distribution. For example, Bloom Filters are indifferent to the data distribution. They have near-optimal space efficiency when data is drawn uniformly from a universe set BID5 (maximal-entropy case) but (as we shall show) are sub-optimal when there is more structure. Prior studies on this theme have investigated compiler optimization BID11, computation graph placement, and data index structures such as b-trees BID22.In the latter work, BID22 explicitly consider the problem of static set membership. By training a neural network over a fixed S (URLs from Google's Transparency Report) with negative examples in the form of held-out URLs, they observe 36% space reduction over a conventional Bloom Filter 1. Crucially this requires iterating over the storage set S a large number of times to embed its salient information into the weights of a neural network classifier. For a new S this process would have to be repeated from scratch. Instead of learning from scratch, we draw inspiration from the few-shot learning advances obtained by meta-learning memory-augmented neural networks BID30 BID34. In this setup, tasks are sampled from a common distribution and a network learns to specialize to (learn) a given task with few examples. This matches very well to applications where many Bloom Filters are instantiated over different subsets of a common data distribution. For example, a Bigtable database usually contains one Bloom Filter per SSTable file. For a large table that contains Petabytes of data, say, there can be over 100, 000 separate instantiated data-structures which share a common row key format and query distribution. Meta-learning allows us to exploit this common redundancy. The main contributions of this paper are A new sparse memory-augmented neural network architecture, the Neural Bloom Filter, which learns to write to memory using a distributed write scheme, and An empirical evaluation of the Neural Bloom Filter meta-learned on one-shot approximate set membership problems of varying structure. We compare with the classical Bloom Filter alongside other memory-augmented neural networks such as the Differentiable Neural Computer and Memory Networks BID33. We find when there is no structure, that differentiates the query set elements and queries, the Neural Bloom Filter learns a solution similar to a Bloom Filter derivative (a Bloom-g filter BID28), but when there is a lot of structure the solution can be considerably more space-efficient. The problem of exact set membership is to state whether or not a given query q belongs to a set of n distinct observations S = {x 1, . . ., x n} where x i are drawn from a universe set U. By counting the number of distinct subsets of size n it can be shown that any such exact set membership tester requires at least log 2 |U | n bits of space. To mitigate the space dependency on |U |, which can be prohibitively large, one can relax the constraint on perfect correctness. Approximate set membership allows for a false positive rate of at most. Specifically we answer q ∈ A(S) where A(S) ⊇ S and p(q ∈ A(S) − S) ≤. It can be shown 2 the space requirement for approximate set membership of uniformly sampled observations is at least n log 2 bits BID5 which can be achieved with perfect hashing. 
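As a quick sanity check of this bound (our own arithmetic, not from the original text), the per-element cost for a 1% false positive rate can be computed directly:

from math import log2

epsilon = 0.01
lower_bound_bits = log2(1 / epsilon)   # information-theoretic minimum per stored element
print(round(lower_bound_bits, 2))      # 6.64 bits, consistent with the figure quoted below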
So for a false positive rate of 1%, say, this amounts to 6.6 bits per element. In contrast to storing raw or compressed elements this can be a huge space saving, for example ImageNet images require 108 KB per image on average when compressed with JPEG, an increase of over four orders of magnitude. The Bloom Filter BID2 is a data structure which solves the dynamic approximate set membership problem with near-optimal space complexity. It assumes access to k uniform hash functions DISPLAYFORM0 m is a binary string of length m which is initialized to zero. Writes are performed by hashing an input x to k locations in M and setting the corresponding bits to 1, M [h i (x)] ← 1; i = 1,..., k. For a given query q the Bloom Filter returns true if all corresponding hashed locations are set to 1 and returns false otherwise: DISPLAYFORM1. This incurs zero false negatives, as any previously observed input must have enabled the corresponding bits in M, however there can be false positives due to hash collisions. To achieve a false positive rate of with minimal space one can set k = log 2 (1/) and m = n log 2 (1/) log 2 e, where e is Euler's number. The ing space is a factor of log 2 e ≈ 1.44 from the optimal static lower bound given by BID5. Recurrent neural networks such as LSTMs retain a small amount of memory via the recurrent state. However this is usually tied to the number of trainable parameters in the model. There has been recent interest in augmenting neural networks with a larger external memory. The method for doing so, via a differentiable write and read interface, was first popularized by the Neural Turing Machine (NTM) BID17 and its successor the Differentiable Neural Computer (DNC) in the context of learning algorithms, and by Memory Networks BID33 in the context of question answering. Memory Networks store embeddings of the input in separate rows of a memory matrix M. Reads are performed via a differentiable content-based addressing operation. Given a query embedding q we take some similarity measure D (e.g. cosine similarity, or negative euclidean distance) against each row in memory and apply a softmax to obtain a soft address vector a ∝ e D(q,M). A read is then a weighted sum over memory r ← a T M. The NTM and DNC use the same content-based read mechanism, but also learns to write. These models can arbitrate whether to write to slots in memory with similar content (content-based writes), temporally ordered locations, or unused memory. When it comes to capacity, there has been consideration to scaling both the DNC and Memory Networks to very large sizes using sparse read and write operations BID29 BID6. However another way to increase the capacity is to increase the amount of compression which occurs in memory. Memory Nets can create compressive representations of each input, but cannot compress jointly over multiple inputs because they are hard-wired to write one slot per timestep. The NTM and DNC can compress over multiple slots in memory because they can arbitrate writes across multiple locations, but in practice seem to choose very sharp read and write addresses. The Kanerva Machine BID36 tackles memory-wide compression using a distributed write scheme to jointly compose and compress its memory contents. The model uses content-based addressing over a separate learnable addressing matrix A, instead of the memory M, and thus learns where to write. We take inspiration from this scheme. 
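Returning to the classical construction described above, the following is a minimal Bloom Filter sketch (ours, not an optimised library); salted SHA-256 hashes stand in for the ideal uniform hash functions:

import math, hashlib

class BloomFilter:
    def __init__(self, n, eps):
        # m = n * log2(1/eps) * log2(e) bits, k = log2(1/eps) hash functions
        self.m = max(1, round(n * math.log2(1 / eps) * math.log2(math.e)))
        self.k = max(1, round(math.log2(1 / eps)))
        self.bits = bytearray(self.m)            # one byte per bit, for clarity over compactness

    def _hashes(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def insert(self, item):                      # logical OR write: set k bits
        for idx in self._hashes(item):
            self.bits[idx] = 1

    def query(self, item):                       # no false negatives; false positives from hash collisions
        return all(self.bits[idx] for idx in self._hashes(item))

For n = 1000 and eps = 0.01 this allocates m of roughly 9585 bits and k = 7 hash functions, consistent with the 9.6kb Bloom Filter figure referenced in the experiments.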
One approach to learning set membership in one-shot would be to use a recurrent neural network, such as an LSTM or DNC. Here, the model sequentially ingests the N elements to store, answers a set of queries using the final state, and is trained by BPTT. Whilst this is a general training approach, and the model may learn a compressive solution, it does not scale well to larger number of elements. Even when N = 1000, backpropagating over a sequence of this length induces computational and optimization challenges. For larger values this quickly becomes intractable. Alternatively one could store an embedding of each element x i ∈ S in a slot-based Memory Network. This is more scalable as it avoids BPTT, because the gradients of each input can be calculated in parallel. However Memory Networks are not a space efficient solution (as shown in in Section 5) because there is no joint compression of inputs. We propose a memory model, the Neural Bloom Filter, that is both compressive and scalable. The network uses a purely additive write operation -i.e. no multiplicative gating or squashing -to update a real-valued memory matrix. An additive write operation can be seen as a continuous relaxation of the the Bloom Filter's logical OR write operation. The benefit of a purely additive write is that the ing memory does not depend upon the ordering of the input (as addition is commutative) and the gradient with respect to each input can be computed in parallel. The network addresses memory by classifying which memory slots to read or write to via a softmax, conditioned purely on the input. We can think of this as a continuous relaxation of the Bloom Filter's hash function. To make the addressing more efficient for larger memory sizes, we sparsify the softmax by preserving only the top k components. When using a sparse address, we find the network can fixate on a subset of rows. This observation is common to prior sparse addressing work BID31. We find sphering, often dubbed whitening, the addressing activations prior to the softmax (see Appendix F for an ablation) remedies this. Finally we make the final linear layer (denoted A) of the addressing softmax to be non-trainable. This is to ensure the number of trainable model parameters is independent of memory size. It also allows the memory to be resized at test-time. The full architecture depicted in FIG0 consists of a controller network which encodes the input to an embedding z ← f enc (x) and transforms this to a write word w ← f w (z) and a query s ← f q (z). An addressing network takes s and performs a sphering transformation to produce a query q with decorrelated dimensions. We used a moving average ZCA transformation. We choose ZCA over other decorrelation approaches as this worked well in practice, e.g. over PCA. We describe the exact moving zca transformation in full in Supplementary A.1. The address a ← σ k (q T A) is computed over memory via content-based attention between q and a non-learnable address matrix A. Here, σ k denotes a sparse softmax where the top k similarities are retained. The addressing matrix A ∼ N (0, I) is chosen to be a fixed sample of Gaussian random variables. It simply represents a linear map from queries to memory addresses; we keep it non-trainable to avoid the coupling of model parameters and memory capacity. A write is performed by running the controller and addressing network, and then additively writing w to the top k addresses in a, M t+1 ← M t + wa T. 
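As a rough numpy sketch of the addressing and write path just described (our shapes and names; the encoder, write, and query networks are left abstract):

import numpy as np

def sparse_softmax(scores, k):
    # sigma_k: softmax restricted to the top-k similarities, zero elsewhere
    a = np.zeros_like(scores)
    top = np.argsort(scores)[-k:]
    e = np.exp(scores[top] - scores[top].max())
    a[top] = e / e.sum()
    return a

def nbf_write(M, A, z, f_w, f_q, sphere, k=3):
    # M: (word_size, num_slots), so the update matches M <- M + w a^T above
    # A: (num_slots, query_dim), a fixed Gaussian sample (non-trainable addresses)
    w = f_w(z)                       # write word
    q = sphere(f_q(z))               # decorrelated (ZCA-sphered) query
    a = sparse_softmax(A @ q, k)     # address over memory slots
    return M + np.outer(w, a)        # purely additive, hence order-independent

Because the update is a sum of outer products, the final memory is invariant to the order of writes and the gradient for each input can be computed in parallel, which is what lets the model avoid BPTT.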
This scheme is inspired by the Bloom Filter, replacing the logical OR with an addition, and it ensures that we do not have to backpropagate through time (BPTT) over sequential writes. A read is performed by also running the controller and addressing network and retrieving the top k addresses from M. These rows in memory are multiplied element-wise by the address weights, r ← M ⊙ a, and are fed through an MLP with the residual inputs w and z. We found this to be more powerful than the conventional read operation r ← a^T M used by the DNC and Memory Networks, as it allows for non-linear interactions between rows in memory at the time of read. To summarise the operations: the controller network computes z ← f_enc(x), the write word w ← f_w(z), and the query s ← f_q(z); the addressing network spheres the query, q ← ZCA(s), and computes the address a ← σ_k(q^T A); a write adds w to the top k addressed slots, M ← M + w a^T; and a read retrieves r ← M ⊙ a and passes it, together with w and z, through the output network f_out. To give an example network configuration, in our experiments we chose f_enc to be a 3-layer CNN in the case of image input, and in the case of text input we chose f_enc to be an LSTM with 128 hidden units. We chose f_w and f_q to be MLPs with a single hidden layer of size 128, and f_out to be a 3-layer MLP with residual connections. We used leaky ReLUs as the non-linearity. We further discuss the motivation for decorrelation / sphering in the addressing network, and the model's relation to uniform hashing, in Appendix A.2. We also discuss how the model could be implemented for O(log m) time reads and writes, where m is the size of memory, and with an O(1) network space overhead by avoiding the storage of A, the addressing matrix, in Appendix A.3.

In this section we discuss space lower bounds for the approximate set membership problem when there is some structure to the storage or query set. This can help us formalize why and where neural networks may be able to beat classical lower bounds for this problem. The n log_2(1/ε) lower bound from BID5 assumes that all subsets S ⊂ U of size n, and all queries q ∈ U, have equal probability. Whilst it is instructive to bound this maximum-entropy scenario, which we can think of as the 'worst case', most applications of approximate set membership, e.g. web cache sharing, querying databases, or spell-checking, involve sets and queries that are not sampled uniformly. For example, the elements within a given set may be highly dependent, there may be a power-law distribution over queries, or the queries and sets themselves may not be sampled independently. A more general space lower bound can be defined by an information theoretic argument from communication complexity BID37. Namely, approximate set membership can be framed as a two-party communication problem between Alice, who observes the set S, and Bob, who observes a query q. They can agree on a shared policy Π in which to communicate. For given inputs S, q they can produce a transcript A_{S,q} = Π(S, q), which a decoding function g can process to answer the membership query. The maximum transcript size is lower-bounded by the mutual information between the inputs and the transcript: max_{S,q} |A_{S,q}| ≥ I(S, q; A_{S,q}) = H(S, q) − H(S, q | A_{S,q}). Thus the problems where we may be able to use less space than the classical lower bound are those where the entropy H(S, q) is small, e.g. our sets are highly non-uniform, or where H(S, q | A_{S,q}) is large, which signifies that many query and set pairs can be solved with the same transcript.
DISPLAYFORM0 Algorithm 1 Meta-Learning Training 1: while training iteration i < budget do Calculate query outputs DISPLAYFORM1 Calculate XE loss: DISPLAYFORM2 Calculate dL/dθ (backpropagate through queries and writes) 10:Update parameters θ i+1 ← Optimizer(θ i, dL/dθ) Our experiments aim to explore scenarios inspired by real-world scenarios where there are varying levels of structure in the storage sets S and queries q. We consider four neural architectures, the LSTM, DNC, Memory Network, and Neural Bloom Filter. An encoder architecture is shared across all models. For images we use a 3-layer convnet with a kernel size of 3 and 64 filters. For text we used a character LSTM with 128 hidden units. The training setup follows the memory-augmented meta-learning training scheme of BID34, only here the task is familiarity classification versus image classification. The network samples tasks which involve classifying familiarity for a given storage set in one-shot. Meta-learning occurs as a two-speed process, where the model quickly learns to recognize a given storage set S within a training episode via writing to a memory or state, and the model slowly learns to improve this fast-learning process by optimizing the model parameters θ over multiple tasks. We detail the training routine in Algorithm 1. DISPLAYFORM0 Failed at task for larger no. items For the RNN baselines (LSTM and DNC) the write operation corresponds to unrolling the network over the inputs and outputting the final state. For these models, the query network is simply an MLP classifier which receives the concatenated final state and query, and outputs a scalar logit. For the Memory Network the read and write operations are defined from the model, and for the Neural Bloom Filter the read and write operations are defined in Section 3. We compared the space (in bits) of the model's memory (or state) to a Bloom Filter at a given false positive rate. The false positive rate is measured empirically over a sample of queries for the learned models; for the Bloom Filter we employ the analytical false positive rate. Beating a Bloom Filter's space usage with the analytical false positive rate implies better performance for any given Bloom Filter library version (as actual Bloom Filter hash functions are not uniform), thus the comparison is fair. For each model we sweep over hyper-parameters relating to model size to obtain their smallest operating size at the desired false positive rate (for the full set, see Appendix D). Because the neural models can emit false negatives, we store these in a (ideally small) backup bloom filter, as proposed by BID22 BID25. We account for the space of this backup bloom filter, and add it to the space usage of the model's memory for parity (See Appendix B for further discussion). Thus the neural network must learn to output a small state in one-shot that can serve set membership queries at a given false positive rate, and emit a small enough number of false negatives such that the backup filter is also small -and the total size is considerably less than a Bloom Filter. We chose three simple set membership tasks that have a graded level of structure and use images as an input type, namely images from MNIST. The input modality is not of principle importance, one could have designed similar tasks with textual input for example, but it is interesting to see how the model operate on images before moving to text. 
We experiment with three different levels of inherent structure to the sampling of sets and queries, however crucially all problems are approximate set membership tasks that can be solved by a Bloom Filter. They do not require generalization or familiarity to the sensitivity of a particular precision. Class-based familiarity, each set of images is sampled with the constraint that they arise from the same randomly-sampled class. Non-uniform instance-based familiarity, the images are sampled without replacement from an exponential distribution. Uniform instance-based familiarity, where each subset contains images sampled uniformly without replacement. See Appendix E for further details on the task setup. In the class-based sampling task we see in FIG2 we see that the DNC, LSTM and Neural Bloom Filter are able to significantly outperform the classical Bloom Filter when images are sampled by class. The Memory Network is able to solve the task with a word size of only 2, however this corresponds to a far greater number of bits per element, 64 versus the Bloom Filter's 9.8 (to a total size of 4.8kb), and so the overall size was prohibitive. The DNC, LSTM, and Neural Bloom Filter are able to solve the task with 500 elements at 1.1kb, 217b, and 382b; a 4.3×, 22×, and 12× saving respectively. For the non-uniform sampling task in FIG2 we see the Bloom Filter is preferable for less than 500 stored elements, but is overtaken thereafter. At 1000 elements the DNC, LSTM, and Neural Bloom Filter consume 7.9kb, 7.7kb, and 6.8kb respectively which corresponds to a 17.6%, 19.7%, and 28.6% reduction over the 9.6kb Bloom Filter. In the uniform sampling task shown in FIG2, there is no structure to the sampling of S. The two architectures which rely on BPTT essentially fail to solve the task at some threshold of storage size. The Neural Bloom Filter solves it with 6.8kb (using a memory size of 50 and word size of 2). If the Neural Bloom Filter were trained at low precision -say 5 bits per element which is a demonstrated capacity of RNN activations BID10 -the overall space would be 5.8kb; still 20% larger than the Bloom Filter. The overall from these sets of experiments is that the classical Bloom Filter cannot be matched (or beaten) by a meta-learned memory-augmented neural network when there is no structure to the data, however in the case of imbalanced data, or highly dependent sets that share common attributes, we do see significant space savings. We wanted to understand whether the Neural Bloom Filter is able to learn'where' to store inputs in memory, as well as what to store. In other memory-augmented neural networks, such as the DNC and Memory Networks, there is no direct correspondence between the contents of an input and where it should be organized in memory. The Neural Bloom Filter has the ability do this, but it may not choose to in practice. We investigated this for the MNIST class-based familiarity task, giving the model 10 memory slots, each with a word size of 2 and a top k addressing of k = 3. We inspect three trained models; the full model, an ablated model where the write words are fixed to be a constant w ← 1, and an ablated model with w ← 1 and a linear read operator r ← a T M. We visualize the average write address and memory contents broken down by class. The full model, shown in FIG3 learns to place some classes in particular slots, e.g. class 1 → slot 5, however most are very distributed. Inspecting the memory contents it is clear the write word encodes a unique 2D token for each class. 
This solution bears resemblance with Bloom-g Filters BID28 where elements are spread across a smaller memory with the same hashing scheme as Bloom Filters, but a unique token is stored in each slot instead of a constant 1-bit value. With the model ablated to store only 1s in FIG3 we see it chooses to allocate semantic addressing codes for some classes (e.g. 0 and 1) however for other classes it uses a distributed address. E.g. for class 3 the model prefers to uniformly spread its writes across memory slot 1, 4, and 8. The model solution is similar to that of Bloom Filters, with distributed addressing codes as a solution -but no information in the written words themselves. By inspecting the determinant of the average addresses, we find they are not linearly separable. Thus the output MLP f out computes a non-linearly separation. When we force the read operation to be linear in FIG3, this is no longer possible and the network learns to produce linearly separable addresses for each class. This solution has a correspondence with perfect hashing as now each (unobserved) input class will map to a unique slot in memory. We look at a task inspired by database interactions. NoSQL databases, such as Bigtable and Cassandra, use a single string-valued row key, which is used to index the data. The database is comprised of a union of files (e.g. SSTables) storing contiguous row key chunks. Bloom Filters are used to determine whether a given query q lies within the stored set. We emulate this setup by constructing a universe of strings, that is alphabetically ordered, and by sampling contiguous ranges (to represent a given SSTable). We choose unique tokens in GigaWord v5 news corpus to comprise as a universe, which consists of 2.5M words. We train models with up to 200 unique strings in a set, and extrapolate to larger sizes. We evaluate the models on a held-out test set. We see in FIG4 that the neural architectures perform best for larger false positive rates. For a storage size of 200 we observe a 6× space reduction (200b vs 1.2kb) when using the Neural Bloom Filter over the classical Bloom Filter. For smaller false positive rates, e.g. 0.1% this gap reduces to a 20% reduction (2.5kb vs 3kb) which decreases during the extrapolation phase. Interestingly the performance degradation of the LSTM is quite smooth between in-sample set sizes (< 200) and out-of-sample sizes (> 200). We hypothesize the LSTM is smoothly forgetting earlier inputs. Finally, we train the Neural Bloom Filter with a storage size of 5000 and compare it to Bloom Filters and Cuckoo Filters in Table 1, where we see 3 − 40× space reduction. Here the LSTM was not able to learn the task; optimizing insertions via BPTT over a sequence of length 5000 did not in a remotely usable solution. A storage size of 5000 is still small, but it is relevant to the NOSQL database-scenario where SSTables (say) are typically kept small. E.g. if the stored values were of size 100kB per row, we would expect 5000 unique keys in an average Bigtable SSTable. We furthermore considered the latency and throughput of such a learned model in this database setting. Both the query and insertion latencies of the Neural Bloom Filter are considerably higher than a classical Bloom Filter (14ms vs 20ns) however the maximum throughput of the Bloom Filter (≈ 60K QPS) can be matched by the neural variant when run on a GPU (NVIDIA P6000). 
The same is not true of an LSTM, which peaks at a maximum insertion throughput of 4K even when the model is placed on the GPU -due to its sequential write scheme. See Appendix F.1 for more details on the speed benchmarks. The large database task validates the choice of simple write scheme for the Neural Bloom Filter. The additive write can be successfully optimized during training over larger storage sets as there is no BPTT, and it is an order of magnitude faster at evaluation time -even matching the throughput of a Bloom Filter. There have been a large number of Bloom Filter variants published; from Counting Bloom Filters which support deletions BID15, Bloomier Filters which store functions vs sets BID8, Compressed Bloom Filters which use arithmetic encoding to compress the storage set BID24, and Cuckoo Filters which use cuckoo hashing to reduce redundancy within the storage vector BID14. Although some of these variants focus on better compression, they do not achieve this by specializing to the data distribution. One of the few works which do address data-dependence are Weighted Bloom Filters BID4 BID35. They work by modulating the number of hash functions used to store or query each input, dependent on its storage and query frequency. This requires estimating a large number of separate storage and query frequencies. The Neural Bloom Filter in contrast is a more general solution to non-uniform S sets. The encoder may be able to learn the statistics of the data distribution during the meta-learning phase and represent these in its parametric weights. However it can also take advantage of dependent sets where the entropy H(S) may be low but the frequency of each element is uniform (e.g. such as row keys in a database). proposes a neurally-inspired set membership data-structure that works by replacing the randomized hash functions with a randomly-wired computation graph of OR and AND gates. The false positive rate is controlled analytically by modulating the number of gates and the overall memory size. However there is no learning or specialization to the data with this setup. BID3 investigates a learnable neural familiarity module, which serves as a biologically plausible model of familiarity mechanisms in the brain, namely within the perirhinal cortex. However this has not shown to be empirically effective at exact matching. BID22 consider the use of a neural network to classify the membership of queries to a fixed set S. Here the network itself is more akin to a perfect hashing setup where multiple epochs are required to find a succinct holistic representation of the set, which is embedded into the weights of the network. In their case this search is performed by gradient-based optimization. We emulate their experimental comparison approach but instead propose a memory architecture that represents the set as activations in memory, versus weights in a network. Mitzenmacher (2018a) discusses the benefits and draw-backs of a learned bloom filter; distinguishing the empirical false positive rate over the distribution of sets S versus the conditional false positive rate of the model given a particular set S. In this paper we focus on the empirical false positive rate because we wish to exploit redundancy in the data and query distribution. Mitzenmacher (2018b) considers a different way to combine classical and learned bloom filters; by'sandwiching' the learned model with a pre-filter classical bloom filter and a post-filter classical bloom filter. 
This is seen to be more efficient for learned models with a larger false positive rate. In many situations neural networks are not a suitable replacement to Bloom Filters and their variants. The Bloom Filter is robust to changes in data distribution, and adversarial attacks, because it delivers a bounded false positive rate for any sampled subset, unlike a neural network. However in this paper we consider the questions, "When might a neural network provide better compression than a Bloom Filter?" and "What kind of neural architecture is practical?". We see that a model which uses an external memory with an adaptable capacity, avoids BPTT with a feed-forward write scheme, and learns to address its memory, is the most promising option in contrast to popular memory models such as DNCs and LSTMs. We term this model the Neural Bloom Filter due to the analogous incorporation of a hashing scheme, commutative write scheme, and multiplicative read mechanism. The Neural Bloom Filter relies on settings where cost of learning to query is possible and will be a net benefit to a population of existing bloom filters. That is, because we rely on meta-learning, we need situations where we have an off-line dataset (both of stored elements and queries) that is similar enough to future data that we wish to store. In the case of a large database we think this is warranted, a database with 100, 000 separate set membership data structures will benefit from a single (or periodic) meta-learning training routine that can run on a single machine and sample from the currently stored data, generating a large number of efficient data-structures. We envisage the space cost of the network to be amortized by sharing it across many neural bloom filters, and the time-cost of executing the network to be offset by the continuous acceleration of dense linear algebra on modern hardware, and the ability to batch writes and queries efficiently. A promising future direction would be to investigate the feasibility of this approach in a production system. A.1 MOVING ZCAThe moving ZCA was computed by taking moving averages of the first and second moment, calculating the ZCA matrix and updating a moving average projection matrix θ zca. This is only done during training, at evaluation time θ zca is fixed. We describe the update below for completeness. DISPLAYFORM0 Calculate singular values DISPLAYFORM1 In practice we do not compute the singular value decomposition at each time step to save computational resources, but instead calculate it and update θ every T steps. We scale the discount in this case η = η/T. We can think of the decorrelation of s, along with the sparse content-based attention with A, as a hash function that maps s to several indices in M. For moderate dimension sizes of s (256, say) we note that the Gaussian samples in A lie close to the surface of a sphere, uniformly scattered across it. If q, the decorrelated query, were to be Gaussian then the marginal distribution of nearest neighbours rows in A will be uniform. If we chose the number of nearest neighbours k = 1 then this implies the slots in M are selected independently with uniform probability. This is the exact hash function specification that Bloom Filters assume. Instead we use a continuous (as we choose k > 1) approximation (as we decorrelate s → q vs Gaussianize) to this uniform hashing scheme, so it is differentiable and the network can learn to shape query representations. 
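A small numpy sketch of the moving ZCA described in A.1 (our variable names; in practice the SVD is recomputed only every T steps with the discount rescaled accordingly):

import numpy as np

class MovingZCA:
    def __init__(self, dim, eta=0.01, eps=1e-5):
        self.mu = np.zeros(dim)        # moving first moment
        self.cov = np.eye(dim)         # moving (centred) second moment
        self.proj = np.eye(dim)        # moving ZCA projection, theta_zca
        self.eta, self.eps = eta, eps

    def update(self, batch):           # only called during training
        self.mu = (1 - self.eta) * self.mu + self.eta * batch.mean(axis=0)
        centred = batch - self.mu
        self.cov = (1 - self.eta) * self.cov + self.eta * (centred.T @ centred) / len(batch)
        u, s, _ = np.linalg.svd(self.cov)
        zca = u @ np.diag(1.0 / np.sqrt(s + self.eps)) @ u.T
        self.proj = (1 - self.eta) * self.proj + self.eta * zca

    def __call__(self, s_vec):         # sphere a query before content-based addressing
        return (s_vec - self.mu) @ self.proj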
We discuss some implementation tricks that could be employed for a production system. One can avoid the linear-time addressing operation σ k (q T A) by using an approximate k-nearest neighbour index, such as locality-sensitive hashing, to extract the nearest neighbours in A. The use of an approximate nearest neighbour index has been empirically considered for scaling memory-augmented neural networks BID29 BID21 however this was used for attention on M directly. As M is dynamic the knn requires frequent re-building as memories are stored or modified. This architecture is simpler -A is fixed and so the approximate knn can be built once. To ensure the size of the network (which can be shared across many memory instantiations) is independent of the number of slots in memory m we can avoid storing A. Because it is a fixed sample of random variables that are generated from a deterministic random number generator we can instead store a set of integer seeds that can be used to re-generate the rows of A. We can let the i-th seed c i, say represented as a 16-bit integer, correspond to the set of 16 rows with indices 16i, 16i + 1,..., 16i + 15. If these rows need to be accessed, they can be regenerated on-the-fly by c i. The total memory cost of A is thus m bits, where m is the number of memory slots 3.Putting these two together it is possible to query and write to a Neural Bloom Filter with m memory slots in O(log m) time, where the network consumes O space. It is worth noting, however, the Neural Bloom Filter's memory is often much smaller than the corresponding classical Bloom Filter's memory, and in many of our experiments is even smaller than the number of unique elements to store. Thus dense matrix multiplication can still be preferable -especially due to its acceleration on GPUs and TPUs BID20 -and a dense representation of A is not inhibitory. As model 3 One can replace 16 with 32 if there are more than one million slots optimization can become application-specific, we do not focus on these implementation details and use the model in its simplest setting with dense matrix operations. For each task we compare the model's memory size, in bits, at a given false positive rate -usually chosen to be 1%. For our neural networks which output a probability p = f (x) one could select an operating point τ such that the false positive rate is. In all of our experiments the neural network outputs a memory (state) s which characterizes the storage set. Let us say SPACE(f,) is the minimum size of s, in bits, for the network to achieve an average false positive rate of. We could compare SPACE(f,) with SPACE(Bloom Filter,) directly, but this would not be a fair comparison as our network f can emit false negatives. To remedy this, we employ the same scheme as BID22 where we use a'backup' Bloom Filter with false positive rate δ to store all false negatives. When f (x) < τ we query the backup Bloom Filter. Because the overall false positive rate is + (1 −)δ, to achieve a false positive rate of at most α (say 1%) we can set = δ = α/2. The number of elements stored in the backup bloom filter is equal to the number of false negatives, denoted n f n. Thus the total space can be calculated, TOTAL SPACE(f,α) = SPACE(f, DISPLAYFORM0 2). We compare this quantity for different storage set sizes. For the MNIST experiments we used a 3-layer convolutional neural network with 64 filters followed by a two-layer feed-forward network with 64&128 hidden-layers respectively. 
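Returning to the space accounting of Appendix B above, the bookkeeping can be made explicit with a small helper (a sketch under our naming; the backup filter is sized with the standard Bloom Filter formula):

import math

def total_space_bits(model_state_bits, num_false_negatives, alpha):
    # Model and backup filter each run at alpha / 2, so the combined
    # false positive rate eps + (1 - eps) * delta stays below alpha.
    delta = alpha / 2.0
    backup_bits = num_false_negatives * math.log2(1 / delta) * math.log2(math.e)
    return model_state_bits + backup_bits

# e.g. a 200-bit state with 3 false negatives at alpha = 1% costs roughly 233 bits in total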
The number of trainable parameters in the Neural Bloom Filter (including the encoder) is 243, 437 which amounts to 7.8Mb at 32-bit precision. We did not optimize the encoder architecture to be lean, as we consider it part of the library in a sense. For example, we do not count the size of the hashing library that an implemented Bloom Filter relies on, which may have a chain of dependencies, or the package size of TensorFlow used for our experiments. Nevertheless we can reason that when the Neural Bloom Filter is 4kb smaller than the classical, such as for the non-uniform instance-based familiarity in FIG2, we would expect to see a net gain if we have a collection of at least 1, 950 data-structures. We imagine this could be optimized quite significantly, by using 16-bit precision and perhaps using more convolution layers or smaller feed-forward linear operations. For the database experiments we used an LSTM character encoder with 256 hidden units followed by another 256 feed-forward layer. The number of trainable parameters in the Neural Bloom Filter 419, 339 which amounts to 13Mb. One could imagine optimizing this by switching to a GRU or investigating temporal convolutions as encoders. We swept over the following hyper-parameters, over the range of memory sizes displayed for each task. We computed the best model parameters by selecting those which ed in a model consuming the least space as defined in Appendix B. This depends on model performance as well as state size. For the class-based familiarity task, and uniform sampling task, the model was trained on the training set and evaluated on the test set. For the class-based task sampling, a class is sampled at random and S is formed from a random subset of images from that class. The queries q are chosen uniformly from either S or from images of a different class. For the non-uniform instance-based familiarity task we sampled images from an exponential distribution. Specifically we used a fix permutation of the training images, and from that ordering chose p(i th image) ∝ 0.999 i for the images to store. The query images were selected uniformly. We used a fixed permutation (or shuffle) of the images to ensure most probability mass was not placed on images of a certain class. I.e. by the natural ordering of the dataset we would have otherwise almost always sampled 0 images. This would be confounding task non-uniformity for other latent structure to the sets. Because the network needed to relate the image to its frequency of occurence for task, the models were evaluated on the training set. This is reasonable as we are not wishing for the model to visually generalize to unseen elements in the setting of this exact-familiarity task. We specifically want the network weights to compress a map of image to probability of storage. For the database task a universe of 2.5M unique tokens were extracted from GigaWord v5. We shuffled the tokens and placed 2.3M in a training set and 250K in a test set. These sets were then sorted alphabetically. A random subset, representing an SSTable, was sampled by choosing a random start index and selecting the next n elements, which form our set S. Queries are sampled uniformly at random from the universe set. Models are trained on the training set and evaluated on the test set. We see the benefit of sphering in Figure 5 where the converged validation performance ends up at a higher state. Investigating the memory fill rate, we see this is because the model is ignoring many more rows than is necessary. 
This is likely due to the network fixating on rows it has accessed with sparse addressing, and ignoring rows it has otherwise never touched -a phenomena noted in BID31. The model finds a local minima in continually storing and accessing the same rows in memory. The effect of sphering is that the query now appears to be Gaussian (up to the first two moments) and so the nearest neighbour in the address matrix A (which is initialized to Gaussian random variables) will be close to uniform. This in a more uniform memory access (as seen in Figure 5b) which significantly aids performance (as seen in Figure 5a). Figure 5: The effect of sphering the query vector on performance in the uniform MNIST sampling task. We provide a brief comparison of run-time (in terms of latency and max-throughput) between a classical Bloom Filter and a Neural Bloom Filter. In several applications of Bloom Filters, the actual query latency is not in the critical path of computation. For example, for a distributed database, the network latency and disk access latency for one tablet can be orders of magnitude greater than the in-memory latency of a Bloom Filter query. For this reason, we have not made run-time a point of focus in this study, and it is implicitly assumed that the neural network is trading off greater latency for less space. However it is worth checking whether run-time could be prohibitive. Table 3: Latency for a single query, and throughput for a batch of 10,000 queries. *Bloom Filter benchmark taken from'query-efficient' Bloom Filter BID9.We use the Neural Bloom Filter network architecture for the large database task (Table 1). The network uses an encoder LSTM with 256 hidden units over the characters, and feeds this through a 256 fully connected layer to encode the input. A two-layer 256-hidden-unit MLP is used as the query architecture. The memory and word size is 8 and 4 respectively, and so the majority of the compute is spent in the encoder and query network. We compare this with an LSTM containing 32 hidden units. We benchmark the single-query latency of the network alongside the throughput of a batch of queries, and a batch of inserts. The Neural Bloom Filter and LSTM is implemented in TensorFlow without any custom kernels or specialized code. We benchmark it on the CPU (Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz) and a GPU (NVIDIA Quadro P6000). We compare to empirical timing published in a query-optimized Bloom Filter variant BID9.We see in Table 3. that the query and insert latency of the Neural Bloom Filter sits at 5ms on the CPU, around 400× slower than the classical Bloom Filter. The LSTM requires roughly the same latency. However when multiple queries are received, the operations batch well and the model is only around 20× slower for both the Neural Bloom Filter and LSTM -as these can be batched in parallel on the CPU. If we are able to use a GPU, the throughput further increases to roughly parity with the classical Bloom Filter. This is a very rough comparison, however it leads to the that a Neural Bloom Filter could be deployed in scenarios with high query load without a catastrophic decrease in throughput. For insertions we see the same insertion throughput of ≈ 58K is maintained for the Neural Bloom Filter on the GPU as the model uses batchable insertion operations that can be parallelized. 
The use of a purely additive write was primarily chosen to avoid optimizing the model with backpropagation through time, but here we see that it is also beneficial for computational efficiency at evaluation time. It is also where the Neural Bloom Filter and the LSTM, which uses a strictly sequential write scheme, differ most. Even with a powerful NVIDIA P6000 GPU, the LSTM's maximum insertion throughput is 4.6K, over an order of magnitude lower. Thus we conclude that even if we were able to train an LSTM to perform exact set membership on the larger database task with good space performance - which we were not able to do, due to the difficulty of training an RNN over rich sequences of length 5000 - it would serve insertions over an order of magnitude more slowly than a Bloom Filter, even with accelerated hardware.
We investigate the space efficiency of memory-augmented neural nets when learning set membership.
We leverage recent insights from second-order optimisation for neural networks to construct a Kronecker factored Laplace approximation to the posterior over the weights of a trained network. Our approximation requires no modification of the training procedure, enabling practitioners to estimate the uncertainty of their models currently used in production without having to retrain them. We extensively compare our method to using Dropout and a diagonal Laplace approximation for estimating the uncertainty of a network. We demonstrate that our Kronecker factored method leads to better uncertainty estimates on out-of-distribution data and is more robust to simple adversarial attacks. Our approach only requires calculating two square curvature factor matrices for each layer. Their size is equal to the respective square of the input and output size of the layer, making the method efficient both computationally and in terms of memory usage. We illustrate its scalability by applying it to a state-of-the-art convolutional network architecture. Neural networks are most commonly trained in a maximum a posteriori (MAP) setting, which only yields point estimates of the parameters, ignoring any uncertainty about them. This often leads to overconfident predictions, especially in regimes that are weakly covered by training data or far away from the data manifold. While the confidence of wrong predictions is usually irrelevant in a research context, it is essential that a Machine Learning algorithm knows when it does not know in the real world, as the consequences of mistakes can be fatal, be it when driving a car or diagnosing a disease. The Bayesian framework of statistics provides a principled way for avoiding overconfidence in the parameters by treating them as unknown quantities and integrating over all possible values. Specifically, for the prediction of new data under a model, it fits a posterior distribution over the parameters given the training data and weighs the contribution of each setting of the parameters to the prediction by the probability of the data under those parameters times their prior probability. However, the posterior of neural networks is usually intractable due to their size and nonlinearity. There has been previous interest in integrating neural networks into the Bayesian framework BID26 BID15 BID28 BID1, however these approaches were designed for small networks by current standards. Recent adaptations to architectures of modern scale rely on crude approximations of the posterior to become tractable. All of BID9 BID14 BID2 assume independence between the individual weights. While they achieve good on small datasets, this strong restriction of the posterior is susceptible to underestimating the uncertainty, in particular when optimising the variational bound. The approach in BID6 requires the use of certain stochastic regularisers which are not commonly present in most recent architectures. Furthermore, it is not clear if the approximate posterior defined by these regularisers is a good fit to the true posterior. Recent work on second-order optimisation of neural networks BID27 BID3 has demonstrated that the diagonal blocks of the curvature can be well approximated by a Kronecker product. We combine this insight with the idea of modelling the posterior over the weights as a Gaussian, using a Laplace approximation BID26 with Kronecker factored covariance matrices. This leads to a computationally efficient matrix normal posterior distribution BID11 over the weights of every layer. 
Since the Laplace approximation is applied after training, our approach can be used to obtain uncertainty estimates from existing networks. Our method is inspired by recent Kronecker factored approximations of the curvature of a neural network BID27 BID3 for optimisation and we give a high-level review of these in the following. While the two methods approximate the Gauss-Newton and Fisher matrix respectively, as they are guaranteed to be positive semi-definite (p.s.d.), we base all of our discussion on the Hessian in order to be as general as possible. We denote a feedforward network as taking an input a 0 = x and producing an output h L. The intermediate representations for layers λ = 1,..., L are denoted as h λ = W λ a λ−1 and a λ = f λ (h λ). We refer to a λ as the activations, and h λ as the (linear) pre-activations. The bias terms are absorbed into the W λ by appending a 1 to each a λ. The network parameters are optimised w.r.t. an error function E(y, h L) for targets y. Most commonly used error functions, such as squared error and categorical cross-entropy, can be interpreted as exponential family negative log likelihoods − log p(y|h L). Traditional second-order methods use either the Hessian matrix or a positive semi-definite approximation thereof to generate parameter updates of the form ∆ = C −1 g, where C is the chosen curvature matrix and g the gradient of the error function parameterised by the network. However, this curvature matrix is infeasbile to compute for modern neural networks as their number of parameters is often in the millions, rendering the size of C of the order of several terabytes. Recent work BID27 BID3 exploits that, for a single data point, the diagonal blocks of these curvature matrices are Kronecker factored: DISPLAYFORM0 where H λ is the Hessian w.r.t. the weights in layer λ. Q λ = a λ−1 a T λ−1 denotes the covariance of the incoming activations a λ−1 and H λ = ∂ 2 E ∂h λ ∂h λ the pre-activation Hessian, i.e. the Hessian of the error w.r.t. the linear pre-activations h λ in a layer. We provide the derivation for this as well as the recursion for calculating H in Appendix A.The Kronecker factorisation holds two key advantages: the matrices that need be computed and stored are much smaller -if we assume all layers to be of dimensionality D, the two factors are each of size D 2, whereas the full Hessian for the weights of only one layer would have D 4 elements. Furthermore, the inverse of a Kronecker product is equal to the Kronecker product of the inverses, so it is only necessary to invert those two moderately sized matrices. In order to maintain this structure over a minibatch of data, all Kronecker factored second-order methods make two core approximations: First, they only model the diagonal blocks corresponding to the weights of a layer, such that the curvature decomposes into L independent matrices. Second, they assume Q λ and H λ to be independent. This is in order to maintain the Kronecker factorisation in expectation, i.e. DISPLAYFORM1, since the expectation of a Kronecker product is not guaranteed to be Kronecker factored itself. The main difference between the Kronecker factored second-order optimisers lies in how they efficiently approximate E [H λ]. For exact calculation, it would be necessary to pass back an entire matrix per data point in a minibatch, which imposes infeasible memory and computational requirements. 
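To make the per-layer factorisation above concrete, a small numpy check (ours; only feasible to materialise for toy layer sizes) of the Kronecker structure and of the identity that makes inversion cheap, (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}:

import numpy as np

D_in, D_out = 4, 3
a_prev = np.random.randn(D_in)                     # activations feeding the layer
H_pre = np.eye(D_out) * 2.0                        # stand-in pre-activation Hessian
Q = np.outer(a_prev, a_prev) + 0.1 * np.eye(D_in)  # damped so the factor is invertible

block = np.kron(Q, H_pre)                          # the layer's (D_in*D_out) x (D_in*D_out) Hessian block
# Inverting the two small factors is equivalent to inverting the large block:
assert np.allclose(np.linalg.inv(block),
                   np.kron(np.linalg.inv(Q), np.linalg.inv(H_pre)))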
KFRA BID3 simply passes back the expectation at every layer, while KFAC BID27 utilises the Fisher identity to only propagate a vector rather than a matrix, approximating the Kronecker factors with a stochastic rank-one matrix for each data point. The diagonal blocks of the Hessian and Gauss-Newton matrix are equal for neural networks with piecewise linear activation functions BID3, thus both methods can be used to directly approximate the diagonal blocks of the Hessian of such networks, as the Gauss-Newton and Fisher are equivalent for networks that parameterise an exponential family log likelihood.3 A SCALABLE LAPLACE APPROXIMATION FOR NEURAL NETWORKS The standard Laplace approximation is obtained by taking the second-order Taylor expansion around a mode of a distribution. For a neural network, such a mode can be found using standard gradientbased methods. Specifically, if we approximate the log posterior over the weights of a network given some data D around a MAP estimate θ *, we obtain: DISPLAYFORM0 where DISPLAYFORM1 is the stacked vector of weights andH = E [H] the average Hessian of the negative log posterior 1. The first order term is missing because we expand the function around a maximum θ *, where the gradient is zero. If we exponentiate this equation, it is easy to notice that the right-hand side is of Gaussian functional form for θ, thus we obtain a normal distribution by integrating over it. The posterior over the weights is then approximated as Gaussian: DISPLAYFORM2 assumingH is p.s.d. We can then approximate the posterior mean when predicting on unseen data D * by averaging the predictions of T Monte Carlo samples θ (t) from the approximate posterior: DISPLAYFORM3 Unfortunately, it is not feasible to compute or invert the Hessian matrix w.r.t. all of the weights jointly. An approximation that is easy to compute in modern automatic differentiation frameworks is the diagonal of the Fisher matrix F, which is simply the expectation of the squared gradients: DISPLAYFORM0 where diag extracts the diagonal of a matrix or turns a vector into a diagonal matrix. Such diagonal approximations to the curvature of a neural network have been used successfully for pruning the weights BID22 and, more recently, for transfer learning BID19.This corresponds to modelling the weights with a Normal distribution with diagonal covariance: DISPLAYFORM1 Unfortunately, even if the Taylor approximation is accurate, this will place significant probability mass in low probability areas of the true posterior if some weights exhibit high covariance. So while it is desirable to model the covariance between the weights, some approximations are needed in order to remain computationally efficient. First, we assume the weights of the different layers to be independent. This corresponds to the block-diagonal approximation in KFAC and KFRA, which empirically preserves sufficient information about the curvature to obtain competitive optimisation performance. For our purposes this means that our posterior factorises over the layers. As discussed above, the Hessian of the log-likelihood for a single datapoint is Kronecker factored, and we denote the two factor matrices as H λ = Q λ ⊗ H λ.2 By further assuming independence between Q and H in all layers, we can approximate the expected Hessian of each layer as: DISPLAYFORM0 Hence, the Hessian of every layer is Kronecker factored over an entire dataset and the Laplace approximation can be approximated by a product of Gaussians. 
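A compact sketch of the diagonal variant just described (our names; grad_fn and predict_fn stand in for the model's per-example log-likelihood gradient and forward pass):

import numpy as np

def diagonal_laplace_predict(theta_map, data, x_star, grad_fn, predict_fn,
                             tau=1e-2, n_samples=30, rng=np.random):
    # Diagonal Fisher: mean of squared per-example gradients
    fisher_diag = np.zeros_like(theta_map)
    for x, y in data:
        g = grad_fn(theta_map, x, y)
        fisher_diag += g * g / len(data)
    # Per-weight posterior precision: N * F + prior precision tau
    var = 1.0 / (len(data) * fisher_diag + tau)
    samples = [theta_map + rng.standard_normal(theta_map.shape) * np.sqrt(var)
               for _ in range(n_samples)]
    return np.mean([predict_fn(theta, x_star) for theta in samples], axis=0)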
Each Gaussian has a Kronecker factored covariance, corresponding to a matrix normal distribution BID11, which considers the two Kronecker factors of the covariance to be the covariances of the rows and columns of a matrix. The two factors are much smaller than the full covariance and allow for significantly more efficient inversion and sampling (we review the matrix normal distribution in Appendix B).Our ing posterior for the weights in layer λ is then: DISPLAYFORM1 In contrast to optimisation methods, we do not need to approximate E [H λ] as it is only calculated once. However, when it is possible to augment the data (e.g. randomised cropping of images), it may be advantageous. We provide a more detailed discussion of this in Appendix C. Just as the log posterior, the Hessian decomposes into a term depending on the data log likelihood and one on the prior. For the commonly used L 2 -regularisation, corresponding to a Gaussian prior, the Hessian is equal to the precision of the prior times the identity matrix. We approximate this by adding a multiple of the identity to each of the Kronecker factors from the log likelihood: DISPLAYFORM0 where τ is the precision of the Gaussian prior on the weights and N the size of the dataset. However, we can also treat them as hyperparameters and optimise them w.r.t. the predictive performance on a validation set. We emphasise that this can be done without retraining the network, so it does not impose a large computational overhead and is trivial to parallelise. Setting N to a larger value than the size of the dataset can be interpreted as including duplicates of the data points as pseudo-observations. Adding a multiple of the uncertainty to the precision matrix decreases the uncertainty about each parameter. This has a regularising effect both on our approximation to the true Laplace, which may be overestimating the variance in certain directions due to ignoring the covariances between the layers, as well as the Laplace approximation itself, which may be placing probability mass in low probability areas of the true posterior. Most recent attempts to approximating the posterior of a neural network are based on formulating an approximate distribution to the posterior and optimising the variational lower bound w.r.t. its parameters. BID9 BID2 BID18 as well as the expectation propagation based approaches of BID14 and BID7 assume independence between the individual weights which, particularly when optimising the KL divergence, often lets the model underestimate the uncertainty about the weights. BID6 interpret Dropout to approximate the posterior with a mixture of delta functions, assuming independence between the columns. BID21 suggest using an ensemble of networks for estimating the uncertainty. Our work is a scalable approximation of BID26. Since the per-layer Hessian of a neural network is infeasible to compute, we suggest a factorisation of the covariance into a Kronecker product, leading to a more efficient matrix normal distribution. The posterior that we obtain is reminiscent of BID24 and BID30, who optimise the parameters of a matrix normal distribution as their weights, which requires a modification of the training procedure. Since the Laplace approximation is a method for predicting in a Bayesian manner and not for training, we focus on comparing to uncertainty estimates obtained from Dropout BID6. The trained networks will be identical, but the prediction methods will differ. 
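Before turning to the experiments, the resulting per-layer posterior can be summarised in a short sketch. Everything below is illustrative rather than a description of our exact implementation: in particular, the precise way N and τ enter each factor, and the assignment of the two factors to the rows and columns of the weight matrix, are assumptions of this sketch.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def sample_layer_weights(W_map, Q_bar, H_bar, N, tau, rng):
    """Draw one weight sample from the Kronecker factored Laplace posterior of a layer.

    W_map: (D_out, D_in) MAP weights (bias absorbed).
    Q_bar: (D_in, D_in) averaged activation covariance E[Q].
    H_bar: (D_out, D_out) averaged pre-activation Hessian E[H].
    The split of N and tau between the two factors is one simple variant, assumed here.
    """
    Q_prec = np.sqrt(N) * Q_bar + np.sqrt(tau) * np.eye(Q_bar.shape[0])
    H_prec = np.sqrt(N) * H_bar + np.sqrt(tau) * np.eye(H_bar.shape[0])
    Lq = cholesky(Q_prec, lower=True)           # Q_prec = Lq Lq^T
    Lh = cholesky(H_prec, lower=True)           # H_prec = Lh Lh^T
    # Two triangular solves turn a standard normal matrix into a sample whose
    # row covariance is H_prec^{-1} and whose column covariance is Q_prec^{-1}.
    Z = rng.normal(size=W_map.shape)            # (D_out, D_in)
    S = solve_triangular(Lh, Z, lower=True, trans='T')
    S = solve_triangular(Lq, S.T, lower=True, trans='T').T
    return W_map + S

# Tiny usage example with random stand-in factors.
rng = np.random.default_rng(0)
D_out, D_in = 3, 4
A_ = rng.normal(size=(D_in, 8)); Q_bar = A_ @ A_.T / 8
B_ = rng.normal(size=(D_out, 8)); H_bar = B_ @ B_.T / 8
W_map = rng.normal(size=(D_out, D_in))
print(sample_layer_weights(W_map, Q_bar, H_bar, N=1000, tau=1.0, rng=rng))
```

Only the two small Cholesky factors need to be stored per layer, and each weight sample costs two triangular solves.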
We also compare to a diagonal Laplace approximation to highlight the benefit from modelling the covariances between the weights. All experiments are implemented using Theano BID32 and BID4. For the diagonal and full Laplace approximation we use the Fisher identity and draw one sample per data point. We set the hyperparameters of the Laplace approximations (see Section 3.4) using a grid search over the likelihood of 20 validation points that are sampled the same way as the training set. The regularised Laplace approximations all give an overall good fit to the HMC predictive posterior. Their uncertainty is slightly higher close to the training data and increases more slowly away from the data than that of the HMC posterior. The diagonal and full Laplace approximation require stronger regularisation than our Kronecker factored one, as they have higher uncertainty when not regularised. In particular the full Laplace approximation vastly overestimates the uncertainty without additional regularisation, leading to a bad predictive mean (see Appendix E for the corresponding figures), as the Hessian of the log likelihood is underdetermined. This is commonly the case in deep learning, as the number of parameters is typically much larger than the number of data points. Hence restricting the structure of the covariance is not only a computational necessity for most architectures, but also allows for more precise estimation of the approximate covariance. For a more realistic test, similar to BID25, we assess the uncertainty of the predictions when classifying data from a different distribution than the training data. For this we train a network with two layers of 1024 hidden units and ReLU transfer functions to classify MNIST digits. We use a learning rate of 10 −2 and momentum of 0.9 for 250 epochs. We apply Dropout with p=0.5 after each inner layer, as our chief interest is to compare against its uncertainty estimates. We further use L 2 -regularisation with a factor of 10 −2 and randomly binarise the images during training according to their pixel intensities and draw 1, 000 such samples per datapoint for estimating the curvature factors. We use this network to classify the images in the notMNIST dataset 4, which contains 28×28 grey-scale images of the letters'A' to'J' from various computer fonts, i.e. not digits. An ideal classifier would make uniform predictions over its classes. We compare the uncertainty obtained by predicting the digit class of the notMNIST images using 1. a deterministic forward pass through the Dropout trained network, 2. by sampling different Dropout masks and averaging the predictions, and by sampling different weight matrices from 3. the matrix normal distribution obtained from our Kronecker factored Laplace approximation as well as 4. the diagonal one. As an additional baseline similar to BID2 BID9, we compare to a network with identical architecture with a fully factorised Gaussian (FFG) approximate posterior on the weights and a standard normal prior. We train the model on the variational lower bound using the reparametrisation trick BID17. We use 100 samples for the stochastic forward passes and optimise the hyperparameters of the Laplace approximations w.r.t. the cross-entropy on the validation set of MNIST.We measure the uncertainty of the different methods as the entropy of the predictive distribution, which has a minimal value of 0 when a single class is predicted with certainty and a maximum of about 2.3 for uniform predictions. 
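Concretely, given the class probabilities from T stochastic forward passes, the entropy of the averaged prediction and its inverse empirical cumulative distribution can be computed as follows; the random probabilities below are placeholders for real network outputs:

```python
import numpy as np

def predictive_entropy(prob_samples):
    """prob_samples: (T, N, C) class probabilities from T stochastic forward passes
    (Dropout masks or posterior weight samples). Returns the entropy (nats) of the
    averaged predictive distribution for each of the N inputs."""
    p_mean = prob_samples.mean(axis=0)
    return -(p_mean * np.log(p_mean + 1e-12)).sum(axis=1)

def inverse_empirical_cdf(entropies, grid):
    """Fraction of inputs whose predictive entropy exceeds each threshold in `grid`."""
    return np.array([(entropies >= t).mean() for t in grid])

# Placeholder predictions: 50 samples, 1000 inputs, 10 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(50, 1000, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
H = predictive_entropy(probs)
print(inverse_empirical_cdf(H, np.linspace(0.0, np.log(10), 5)))
```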
FIG2 shows the inverse empirical cumulative distribution of the entropy values obtained from the four methods. Consistent with the in BID6, averaging the probabilities of multiple passes through the network yields predictions with higher uncertainty than a deterministic pass that approximates the geometric average BID29. However, there still are some images that are predicted to be a digit with certainty. Our Kronecker factored Laplace approximation makes hardly any predictions with absolute certainty and assigns high uncertainty to most of the letters as desired. The diagonal Laplace approximation required stronger regularisation towards predicting deterministically, yet it performs similarly to Dropout. As shown in TAB0, however, the network makes predictions on the test set of MNIST with similar accuracy to the deterministic forward pass and MC Dropout when using our approximation. The variational factorised Gaussian posterior has low uncertainty as expected. To further test the robustness of our prediction method close to the data distribution, we perform an adversarial attack on a neural network. As first demonstrated in BID31, neural networks are prone to being fooled by gradient-based changes to their inputs. BID23 suggest, and provide empirical support, that Bayesian models may be more robust to such attacks, since they implicitly form an infinitely large ensemble by integrating over the model parameters. For our experiments, we use the fully connected net trained on MNIST from the previous section and compare the sensitivity of the different prediction methods for two kinds of adversarial attacks. First, we use the untargeted Fast Gradient Sign method x adv = x − η sgn(∇ x max y log p (M) (y|x)) suggested in BID8, which takes the gradient of the class predicted with maximal probability by method M w.r.t. the input x and reduces this probability with varying step size η. This step size is rescaled by the difference between the maximal and minimal value per dimension in the dataset. It is to be expected that this method generates examples away from the data manifold, as there is no clear subset of the data that corresponds to e.g. "not ones". FIG3 shows the average predictive uncertainty and the accuracy on the original class on the MNIST test set as the step size η increases. The Kronecker factored Laplace approximation achieves significantly higher uncertainty than any other prediction method as the images move away from the data. Both the diagonal and the Kronecker factored Laplace maintain higher accuracy than MC Dropout on their original predictions. Interestingly, the deterministic forward pass appears to be most robust in terms of accuracy, however it has much smaller uncertainty on the predictions it makes and will confidently predict a false class for most images, whereas the other methods are more uncertain. Furthermore, we perform a targeted attack that attempts to force the network to predict a specific class, in our case'0' following BID23. Hence, for each method, we exclude all data points in the test set that are already predicted as'0'. The updates are of similar form to the untargeted attack, however they increase the probability of the pre-specified class y rather than decreasing the current maximum as x DISPLAYFORM0 We use a step size of η=10 −2 for the targeted attack. The uncertainty and accuracy on the original and target class are shown in FIG4. 
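For concreteness, both attack variants can be written down in a few lines. The sketch below uses a linear softmax classifier in place of the network so that the input gradient has a closed form; this stand-in model, the toy dimensions, and the omission of the per-dimension rescaling of the step size are all simplifications made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Linear softmax classifier standing in for the trained network.
D, C = 20, 10
W = rng.normal(size=(C, D))
b = rng.normal(size=C)

def grad_logp_wrt_x(x, y):
    """Gradient of log p(y|x) w.r.t. the input for the linear model."""
    p = softmax(W @ x + b)
    return W[y] - p @ W

def fgsm_untargeted(x, eta, steps=1):
    for _ in range(steps):
        y_hat = int(np.argmax(softmax(W @ x + b)))        # currently most likely class
        x = x - eta * np.sign(grad_logp_wrt_x(x, y_hat))  # reduce its probability
    return x

def fgsm_targeted(x, target, eta, steps=1):
    for _ in range(steps):
        x = x + eta * np.sign(grad_logp_wrt_x(x, target))  # increase the target class
    return x

x = rng.normal(size=D)
print(softmax(W @ x + b).argmax(),
      softmax(W @ fgsm_targeted(x, target=0, eta=1e-2, steps=50) + b).argmax())
```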
Here, the Kronecker factored Laplace approximation has slightly smaller uncertainty at its peak in comparison to the other methods, however it appears to be much more robust. It only misclassifies over 50% of the images after about 20 steps, whereas for the other methods this is the case after roughly 10 steps and reaches 100% accuracy on the target class after almost 50 updates, whereas the other methods are fooled on all images after about 25 steps. In conjunction with the experiment on notMNIST, it appears that the Laplace approximation achieves higher uncertainty than Dropout away from the data, as in the untargeted attack. In the targeted attack it exhibits smaller uncertainty than Dropout, yet it is more robust to having its prediction changed. The diagonal Laplace approximation again performs similarly to Dropout. To highlight the scalability of our method, we apply it to a state-of-the-art convolutional network architecture. Recently, deep residual networks BID12 b) have been the most successful ones among those. As demonstrated in BID10, Kronecker factored curvature methods are applicable to convolutional layers by interpreting them as matrix-matrix multiplications. We compare our uncertainty estimates on wide residual networks BID33, a recent variation that achieved competitive performance on CIFAR100 BID20 while, in contrast to most other residual architectures, including Dropout at specific points. While this does not correspond to using Dropout in the Bayesian sense BID5, it allows us to at least compare our method to the uncertainty estimates obtained from Dropout. We note that it is straightforward to incorporate batch normalisation BID16 into the curvature backpropagation algorithms, so we apply a standard Laplace approximation to its parameters as well. We are not aware of any interpretation of Dropout as performing Bayesian inference on the parameters of batch normalisation. Further implementation details are in Appendix G.Again, the accuracy of the prediction methods is comparable (see TAB1 in Appendix F). For calculating the curvature factors, we draw 5, 000 samples per image using the same data augmentation as during training, effectively increasing the dataset size to 2.5×10 8. The diagonal approximation had to be regularised to the extent of becoming deterministic, so we omit it from the . In FIG5 we compare the distribution of the predictive uncertainty on the test set. 5 We distinguish between the uncertainty on correct and incorrect classifications, as the mistakes of a system used in practice may be less severe if the network can at least indicate that it is uncertain. Thus, high uncertainty on misclassifications and low uncertainty on correct ones would be desirable, such that a system could return control to a human expert when it can not make a confident decision. In general, the network tends to be more uncertain on its misclassifcations than its correct ones regardless of whether it was trained with or without Dropout and of the method used for prediction. Both Dropout and the Laplace approximation similarly increase the uncertainty in the predictions, however this is irrespective of the correctness of the classification. Yet, our experiments show that the Kronecker factored Laplace approximation can be scaled to modern convolutional networks and maintain good classification accuracy while having similar uncertainty about the predictions as Dropout. 
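The per-correctness split of the predictive entropy used above is straightforward to compute; a minimal sketch with random placeholder predictions is given below.

```python
import numpy as np

def entropy_by_correctness(probs, labels):
    """probs: (N, C) predictive probabilities (e.g. averaged over Dropout masks or
    posterior weight samples); labels: (N,) ground-truth classes. Returns the mean
    predictive entropy on correctly and incorrectly classified points separately."""
    pred = probs.argmax(axis=1)
    ent = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    correct = pred == labels
    return ent[correct].mean(), ent[~correct].mean()

# Random placeholders standing in for the CIFAR100 test predictions.
rng = np.random.default_rng(0)
logits = rng.normal(size=(10000, 100))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
labels = rng.integers(0, 100, size=10000)
print(entropy_by_correctness(probs, labels))
```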
We had to use much stronger regularisation for the Laplace approximation on the wide residual network, possibly because the block-diagonal approximation becomes more inaccurate on deep networks, possibly because the number of parameters is much higher relative to the number of data. It would be interesting to see how the Laplace approximations behaves on a much larger dataset like ImageNet for similarly sized networks, where we have a better ratio of data to parameters and curvature directions. However, even on a relatively small dataset like CIFAR we did not have to regularise the Laplace approximation to the degree of the posterior becoming deterministic. We presented a scalable approximation to the Laplace approximation for the posterior of a neural network and provided experimental suggesting that the uncertainty estimates are on par with current alternatives like Dropout, if not better. It enables practitioners to obtain principled uncertainty estimates from their models, even if they were trained in a maximum likelihood/MAP setting. There are many possible extensions to this work. One would be to automatically determine the scale and regularisation hyperparameters of the Kronecker factored Laplace approximation using the model evidence similar to how BID26 interpolates between the data log likelihood and the width of the prior. The model evidence could further be used to perform Bayesian model averaging on ensembles of neural networks, potentially improving their generalisation ability and uncertainty estimates. A challenging application would be active learning, where only little data is available relative to the number of curvature directions that need to be estimated. Here, we provide the basic derivation of the factorisation of the diagonal blocks of the Hessian in Eq. 1 and the recursive formula for calculating H as presented in BID3.The Hessian of a neural network with parameters θ as defined in the main text has elements: DISPLAYFORM0 For a given layer λ, the gradient w.r.t. a weight W λ a,b is: DISPLAYFORM1 Keeping λ fixed and differentiating again, we find that the per-sample Hessian of that layer is: DISPLAYFORM2 where DISPLAYFORM3 is the pre-activation Hessian. We can reexpress this in matrix notation as a Kronecker product as in Eq. 1: DISPLAYFORM4 The pre-activation Hessian can be calculated recursively as: DISPLAYFORM5 where the diagonal matrices B and D are defined as: DISPLAYFORM6 DISPLAYFORM7 f and f denote the first and second derivative of the transfer function. The recursion is initialised with the Hessian of the error w.r.t. the linear network outputs. For further details and on how to calculate the diagonal blocks of the Gauss-Newton and Fisher matrix, we refer the reader to BID3 and BID27. The matrix normal distribution BID11 ) is a multivariate distribution over an entire matrix of shape n × p rather than just a vector. In contrast to the multivariate normal distribution, it is parameterised by two p.s.d. covariance matrices, U: n × n and V: p × p, which indicate the covariance of the rows and columns respectively. In addition it has a mean matrix M: n × p. A vectorised sample from a matrix normal distribution X ∼ MN (M, U, V) corresponds to a sample from a normal distribution vec(X) ∼ N (vec(M), U ⊗ V ). However, samples can be drawn more efficiently as X = M + AZB with Z ∼ MN (0, I, I), and AA T = U and B T B = V. The sample Z corresponds to a sample from a normal distribution of length np that has been reshaped to a n × p matrix. 
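The sampling identity can be verified numerically. The sketch below, with small arbitrary covariances chosen for illustration, draws samples as X = M + AZB and checks that the empirical covariance of the vectorised samples matches U ⊗ V (using a row-major vectorisation):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 3, 2

# Arbitrary p.s.d. row and column covariances U (n x n) and V (p x p), and a mean M.
Au = rng.normal(size=(n, n)); U = Au @ Au.T + np.eye(n)
Bv = rng.normal(size=(p, p)); V = Bv @ Bv.T + np.eye(p)
M = rng.normal(size=(n, p))

A = np.linalg.cholesky(U)       # A A^T = U
B = np.linalg.cholesky(V).T     # lower Cholesky L of V satisfies L L^T = V, so B = L^T gives B^T B = V

T = 50000
Z = rng.normal(size=(T, n, p))
X = M + A @ Z @ B               # batch of matrix normal samples, shape (T, n, p)
flat = (X - M).reshape(T, -1)   # row-major vectorisation
emp_cov = flat.T @ flat / T
print(np.max(np.abs(emp_cov - np.kron(U, V))))   # small for large T
```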
This is more efficient in the sense that we only need to calculate two matrix-matrix products of small matrices, rather than a matrix-vector product with one big one. While the square root of Q λ is calculated during the forward pass on all layers, H requires an additional backward pass. Strictly speaking, it is not essential to approximate E [H] for the Kronecker factored Laplace approximation, as in contrast to optimisation procedures the curvature only needs to be calculated once and is thus not time critical. For datasets of the scale of ImageNet and the networks used for such datasets, it would still be impractically slow to perform the calculation for every data point individually. Furthermore, as most datasets are augmented during training, e.g. random cropping or reflections of images, the curvature of the network can be estimated using the same augmentations, effectively increasing the size of the dataset by orders of magnitude. Thus, we make use of the minibatch approximation in our experiments -as we make use of data augmentationin order to demonstrate its practical applicability. We note that E [H] can be calculated exactly by running KFRA BID3 with a minibatchsize of one, and then averaging the . KFAC BID27, in contrast, stochastically approximates the Fisher matrix, so even when run for every datapoint separately, it cannot calculate the curvature factor exactly. In the following, we also show figures for the adversarial experiments in which we calculate the curvature per datapoint and without data augmentation: Figure 6: Untargeted adversarial attack for Kronecker factored Laplace approximation with the curvature calculated with and without data augmentation/approximating the activation Hessian. Fig. 6 and FIG6 show how the Laplace approximation with the curvature estimated from 1000 randomly sampled binary MNIST images and the activation Hessian calculated with a minibatch size of 100 performs in comparison to the curvature factor being calculated without any data augmentation with a batch size of 100 or exactly. We note that without data augmentation we had to use much stronger regularisation of the curvature factors, in particular we had to add a non-negligible multiple of the identity to the factors, whereas with data augmentation it was only needed to ensure that the matrices are invertible. The Kronecker factored Laplace approximation reaches particularly high uncertainty on the untargeted adversarial attack and is most robust on the targeted attack when using data augmentation, suggesting that it is particularly well suited for large datasets and ones where some form of data augmentation can be applied. The difference between approximating the activation Hessian over a minibatch and calculating it exactly appears to be negligible. If we denote the dimensionality of the input to layer λ as D λ−1 and its output as D λ, the curvature factors correspond to the two precision matrices with DISPLAYFORM0'parameters' to estimate, since they are symmetric. So across a network, the number of curvature directions that we are estimating grows linearly in the number of layers and quadratically in the dimension of the layers, i.e. the number of columns of the weight matrices. 
The size of the full Hessian, on the other hand, grows quadratically in the number of layers and with the fourth power in the dimensionality of the layers (assuming they are all the same size).Once the curvature factors are calculated, which only needs to be done once, we use their Cholesky decomposition to solve two triangular linear systems when sampling weights from the matrix normal distribution. We use the same weight samples for each minibatch, i.e. we do not sample a weight matrix per datapoint. This is for computational efficiency and does not change the expectation. One possibility to save computation time would be to sample a fixed set of weight matrices from the approximate posterior -in order to avoid solving the linear system on every forward pass -and treat the networks that they define as an ensemble. The individual ensemble members can be evaluated in parallel and their outputs averaged, which can be done with a small overhead over evaluating a single network given sufficient compute resources. A further speed up can be achieved by distilling the predictive distributions of the Laplace network into a smaller, deterministic feedforward network as successfully demonstrated in BID0 for posterior samples using HMC.E COMPLEMENTARY FIGURES FOR THE TOY DATASET FIG7 shows the different Laplace approximations (Kronecker factored, diagonal, full) from the main text without any hyperparameter tuning. The figure of the uncertainty obtained from samples using HMC is repeated. Note that the scale is larger than in the main text due to the high uncertainty of the Laplace approximations. The Laplace approximations are increasingly uncertain away from the data, as the true posterior estimated from HMC samples, however they all overestimate the uncertainty without regularisation. This is easy to fix by optimising the hyperparameters on a validation set as discussed in the main text, ing in posterior uncertainty much more similar to the true posterior. As previously discussed in BID3, the Hessian of a neural network is usually underdetermined as the number of data points is much smaller than the number of parameters -in our case we have 20 data points to estimate a 78×78 precision matrix. This leads to the full Laplace approximation vastly overestimating the uncertainty and a bad predictive mean. Both the Kronecker factored and the diagonal approximation exhibit smaller variance than the full Laplace approximation as they restrict the structure of the precision matrix. Consistently with the other experiments, we find the diagonal Laplace approximation to place more mass in low probability areas of the posterior than the Kronecker factored approximation, ing in higher variance on the regression problem. This leads to a need for greater regularisation of the diagonal approximation to obtain acceptable predictive performance, and underestimating the uncertainty. This section shows the accuracy values obtained from the different predictions methods on the feedforward networks for MNIST and the wide residual network for CIFAR100. The for MNIST are shown in TAB0 and the for CIFAR in TAB1.In all cases, neither MC Dropout nor the Laplace approximation significantly change the classification accuracy of the network in comparison to a deterministic forward pass. 
Our wide residual network has n=3 block repetitions and a width factor of k=8 on CIFAR100 with and without Dropout using hyperparameters taken from BID33: the network parameters are trained on a cross-entropy loss using Nesterov momentum with an initial learning rate of 0.1 and momentum of 0.9 for 200 epochs with a minibatch size of 128. We decay the learning rate every 50 epochs by a factor of 0.2, which is slightly different to the schedule used in BID33 ) (they decay after 60, 120 and 160 epochs). As the original authors, we use L 2 -regularisation with a factor of 5×10 −4.We make one small modification to the architecture: instead of downsampling with 1×1 convolutions with stride 2, we use 2×2 convolutions. This is due to Theano not supporting the transformation of images into the patches extracted by a convolution for 1×1 convolutions with stride greater than 1, which we require for our curvature backpropagation through convolutions. We apply a standard Laplace approximation to the batch normalisation parameters -a Kronecker factorisation is not needed, since the parameters are one-dimensional. When calculating the curvature factors, we use the moving averages for the per-layer means and standard deviations obtained after training, in order to maintain independence between the data points in a minibatch. We need to make a further approximation to the ones discussed in Section 2.2 when backpropagating the curvature for residual networks. The residual blocks compute a function of the form res(x) = x + f φ (x), where f φ typically is a sequence of convolutions, batch normalisation and elementwise nonlinearities. This means that we would need to pass back two curvature matrices, one for each summand. However, this would double the number of backpropagated matrices for each residual connection, hence the computation time/memory requirements would grow exponentially in the number of residual blocks. Therefore, we simply add the curvature matrices after each residual connection.
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Skdvd2xAZ
We construct a Kronecker factored Laplace approximation for neural networks that leads to an efficient matrix normal distribution over the weights.
Spectral embedding is a popular technique for the representation of graph data. Several regularization techniques have been proposed to improve the quality of the embedding with respect to downstream tasks like clustering. In this paper, we explain on a simple block model the impact of the complete graph regularization, whereby a constant is added to all entries of the adjacency matrix. Specifically, we show that the regularization forces the spectral embedding to focus on the largest blocks, making the representation less sensitive to noise or outliers. We illustrate these on both on both synthetic and real data, showing how regularization improves standard clustering scores. Spectral embedding is a standard technique for the representation of graph data . Given the adjacency matrix A ∈ R n×n + of the graph, it is obtained by solving either the eigenvalue problem: or the generalized eigenvalue problem: where D = diag(A1 n) is the degree matrix, with 1 n the all-ones vector of dimension n, L = D − A is the Laplacian matrix of the graph, Λ ∈ R k×k is the diagonal matrix of the k smallest (generalized) eigenvalues of L and X ∈ R n×k is the corresponding matrix of (generalized) eigenvectors. In this paper, we only consider the generalized eigenvalue problem, whose solution is given by the spectral decomposition of the normalized Laplacian matrix L norm = I − D −1/2 AD −1/2 . The spectral embedding can be interpreted as equilibrium states of some physical systems (; ;), a desirable property in modern machine learning. However, it tends to produce poor on real datasets if applied directly on the graph . One reason is that real graphs are most often disconnected due to noise or outliers in the dataset. In order to improve the quality of the embedding, two main types of regularization have been proposed. The first artificially increases the degree of each node by a constant factor , while the second adds a constant to all entries of the original adjacency matrix (; ;). In the practically interesting case where the original adjacency matrix A is sparse, the regularized adjacency matrix is dense but has a so-called sparse + low rank structure, enabling the computation of the spectral embedding on very large graphs . While explains the effects of regularization through graph conductance and through eigenvector perturbation on the Stochastic Block Model, there is no simple interpretation of the benefits of graph regularization. In this paper, we show on a simple block model that the complete graph regularization forces the spectral embedding to separate the blocks in decreasing order of size, making the embedding less sensitive to noise or outliers in the data. Indeed, identified that, without regularization, the cuts corresponding to the first dimensions of the spectral embedding tend to separate small sets of nodes, so-called dangling sets, loosely connected to the rest of the graph. Our work shows more explicitly that regularization forces the spectral embedding to focus on the largest clusters. Moreover, our analysis involves some explicit characterization of the eigenvalues, allowing us to quantify the impact of the regularization parameter. The rest of this paper is organized as follows. Section 2 presents block models and an important preliminary about their aggregation. Section 3 presents the main of the paper, about the regularization of block models, while Section 4 extends this to bipartite graphs. Section 5 presents the experiments and Section 6 concludes the paper. 
Let A ∈ R n×n + be the adjacency matrix of an undirected, weight graph, that is a symmetric matrix such that A ij > 0 if and only if there is an edge between nodes i and j, with weight A ij. Assume that the n nodes of the graph can be partitioned into K blocks of respective sizes n 1,..., n K so that any two nodes of the same block have the same neighborhood, i.e., the corresponding rows (or columns) of A are the same. Without any loss of generality, we assume that the matrix A has rank K. We refer to such a graph as a block model. Let Z ∈ R n×K be the associated membership matrix, with Z ij = 1 if index i belongs to block j and 0 otherwise. We denote by W = Z T Z ∈ R K×K the diagonal matrix of block sizes.. This is the adjacency matrix of the aggregate graph, where each block of the initial graph is replaced by a single node; two nodes in this graph are connected by an edge of weight equal to the total weight of edges between the corresponding blocks in the original graph. We denote byD = diag(Ā1 K) the degree matrix and byL =D −Ā the Laplacian matrix of the aggregate graph. The following shows that the solution to the generalized eigenvalue problem follows from that of the aggregate graph: Proposition 1. Let x be a solution to the generalized eigenvalue problem: Then either Z T x = 0 and λ = 1 or x = Zy where y is a solution to the generalized eigenvalue problem:L y = λDy. Proof. Consider the following reformulation of the generalized eigenvalue problem: Since the rank of A is equal to K, there are n − K eigenvectors x associated with the eigenvalue λ = 1, each satisfying Z T x = 0. By orthogonality, the other eigenvectors satisfy x = Zy for some vector y ∈ R K. We get: Thus y is a solution to the generalized eigenvalue problem. Let A be the adjacency matrix of some undirected graph. We consider a regularized version of the graph where an edge of weight α is added between all pairs of nodes, for some constant α > 0. The corresponding adjacency matrix is given by: where J = 1 n 1 T n is the all-ones matrix of same dimension as A. We denote by D α = diag(A α 1 n) the corresponding degree matrix and by L α = D α − A α the Laplacian matrix. We first consider a simple block model where the graph consists of K disjoint cliques of respective sizes n 1 > n 2 > · · · > n K nodes, with n K ≥ 1. In this case, we have A = ZZ T, where Z is the membership matrix. The objective of this section is to demonstrate that, in this setting, the k-th dimension of the spectral embedding isolates the k − 1 largest cliques from the rest of the graph, for any k ∈ {2, . . ., K} Lemma 1. Let λ 1 ≤ λ 2 ≤... ≤ λ n be the eigenvalues associated with the generalized eigenvalue problem: Proof. Since the Laplacian matrix L α is positive semi-definite, all eigenvalues are non-negative . We know that the eigenvalue 0 has multiplicity 1 on observing that the regularized graph is connected. Now for any vector x, so that the matrix A α is positive semi-definite. In view of, this shows that λ ≤ 1 for any eigenvalue λ. The proof then follows from Proposition 1, on observing that the eigenvalue 1 has multiplicity n − K. Lemma 2. Let x be a solution to the generalized eigenvalue problem with λ ∈. There exists some s ∈ {+1, −1} such that for each node i in block j, Proof. In view of Proposition 1, we have x = Zy where y is a solution to the generalized eigenvalue problem of the aggregate graph, with adjacency matrix: where I K is the identity matrix of dimension K × K. 
We deduce the degree matrix: and the Laplacian matrix:L The generalized eigenvalue problem associated with the aggregate graph is: After multiplication by W −1, we get: Observing that and since W = diag(n 1, . . ., n K), The then follows from the fact that x = Zy. Lemma 3. The K smallest eigenvalues satisfy: where for all j = 1,..., K, µ j = αn αn + n j. Proof. We know from Lemma 1 that the K smallest eigenvalues are in. Let x be a solution to the generalized eigenvalue problem with λ ∈. We know that x = Zy where y is an eigenvector associated with the same eigenvalue λ for the aggregate graph. Since 1 K is an eigenvector for the eigenvalue 0, we have y We then deduce from and This condition cannot be satisfied if λ < µ 1 or λ > µ K as the terms of the sum would be either all positive or all negative. Now let y be another eigenvector for the aggregate graph, with y TD α y = 0, for the eigenvalue λ ∈. By the same argument, we get: with λ ∈ {µ 1, . . ., µ K}. This condition cannot be satisfied if λ and λ are in the same interval (µ j, µ j+1) for some j as the terms in the sum would be all positive. There are K − 1 eigenvalues in for K − 1 such intervals, that is one eigenvalue per interval. The main of the paper is the following, showing that the k − 1 largest cliques of the original graph can be recovered from the spectral embedding of the regularized graph in dimension k. Theorem 1. Let X be the spectral embedding of dimension k, as defined by, for some k in the set {2, . . ., K}. Then sign(X) gives the k − 1 largest blocks of the graph. Proof. Let x be the j-th column of the matrix X, for some j ∈ {2, . . ., k}. In view of Lemma 3, this is the eigenvector associated with eigenvalue λ j ∈ (µ j−1, µ j), so that In view of Lemma 2, all entries of x corresponding to blocks of size n 1, n 2..., n j−1 have the same sign, the other having the opposite sign. Theorem 1 can be extended in several ways. First, the assumption of distinct block sizes can easily be relaxed. If there are L distinct values of block sizes, say m 1,..., m L blocks of sizes n 1 >... > n L, there are L distinct values for the thresholds µ j and thus L distinct values for the eigenvalues λ j in, the multiplicity of the j-th smallest eigenvalue being equal to m j. The spectral embedding in dimension k still gives k − 1 cliques of the largest sizes. Second, the graph may have edges between blocks. Taking A = ZZ T + εJ for instance, for some parameter ε ≥ 0, the are exactly the same, with α replaced by +α. A key observation is that regularization really matters when ε → 0, in which case the initial graph becomes disconnected and, in the absence of regularization, the spectral embedding may isolate small connected components of the graph. In particular, the regularization makes the spectral embedding much less sensitive to noise, as will be demonstrated in the experiments. Finally, degree correction can be added by varying the node degrees within blocks. Taking A = θZZ T θ, for some arbitrary diagonal matrix θ with positive entries, similar can be obtained under the regularization A α = A + αθJθ. Interestingly, the spectral embedding in dimension k then recovers the k − 1 largest blocks in terms of normalized weight, the ratio of the total weight of the block to the number of nodes in the block. 
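Theorem 1 can be checked numerically in a few lines. The sketch below, with assumed toy block sizes, builds a graph of disjoint cliques, applies the complete graph regularization, solves the generalized eigenvalue problem and inspects the signs of the eigenvectors:

```python
import numpy as np
from scipy.linalg import eigh

sizes = [40, 20, 10, 5]                 # assumed toy block sizes, in decreasing order
K, n = len(sizes), sum(sizes)
Z = np.zeros((n, K))
start = 0
for j, s in enumerate(sizes):
    Z[start:start + s, j] = 1
    start += s
A = Z @ Z.T                              # K disjoint cliques (block model A = Z Z^T)

alpha = 1.0                              # complete graph regularization A + alpha * J
A_reg = A + alpha * np.ones((n, n))
D_reg = np.diag(A_reg.sum(axis=1))
L_reg = D_reg - A_reg

_, vecs = eigh(L_reg, D_reg)             # generalized eigenvectors, eigenvalues ascending
for k in range(1, K):                    # non-trivial eigenvectors x_2, ..., x_K
    mean_sign_per_block = [int(np.sign(vecs[Z[:, j] == 1, k]).mean()) for j in range(K)]
    print(k, mean_sign_per_block)
```

The k-th non-trivial eigenvector assigns one sign to the k largest blocks and the opposite sign to all remaining blocks, in line with Lemma 2 and Theorem 1.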
Let B = R n×m + be the biadjacency matrix of some bipartite graph with respectively n, m nodes in each part, i.e., B ij > 0 if and only if there is an edge between node i in the first part of the graph and node j in the second part of the graph, with weight B ij. This is an undirected graph of n + m nodes with adjacency matrix: The spectral embedding of the graph can be written in terms of the biadjacency matrix as follows: where X 1, X 2 are the embeddings of each part of the graph, with respective dimensions n × k and In particular, the spectral embedding of the graph follows from the generalized SVD of the biadjacency matrix B. The complete regularization adds edges between all pairs of nodes, breaking the bipartite structure of the graph. Another approach consists in applying the regularization to the biadjacency matrix, i.e., in considering the regularized bipartite graph with biadjacency matrix: B α = B + αJ, where J = 1 n 1 T m is here the all-ones matrix of same dimension as B. The spectral embedding of the regularized graph is that associated with the adjacency matrix: As in Section 3, we consider a block model so that the biadjacency matrix B is block-diagonal with all-ones block matrices on the diagonal. Each part of the graph consists of K groups of nodes of respective sizes n 1 >... > n K and m 1 >... > m K, with nodes of block j in the first part connected only to nodes of block j in the second part, for all j = 1,..., K. We consider the generalized eigenvalue problem associated with the above matrix A α. In view of, this is equivalent to the generalized SVD of the regularized biadjacency matrix B α. We have the following , whose proofs are deferred to the appendix: Lemma 4. Let λ 1 ≤ λ 2 ≤... ≤ λ n be the eigenvalues associated with the generalized eigenvalue problem. We have λ 1 = 0 < λ 2 ≤... ≤ λ K < λ K+1 =... = λ n−2K <... < λ n = 2. Lemma 5. Let x be a solution to the generalized eigenvalue problem with λ ∈. There exists s 1, s 2 ∈ {+1, −1} such that for each node i in block j of part p ∈ {1, 2}, Lemma 6. The K smallest eigenvalues satisfy: Published as a conference paper at ICLR 2020 Theorem 2. Let X be the spectral embedding of dimension k, as defined by, for some k in the set {2, . . ., K}. Then sign(X) gives the k − 1 largest blocks of each part of the graph. Like Theorem 1, the assumption of decreasing block sizes can easily be relaxed. Assume that block pairs are indexed in decreasing order of µ j. Then the spectral embedding of dimension k gives the k − 1 first block pairs for that order. It is interesting to notice that the order now depends on α: when α → 0 +, the block pairs j of highest value (n nj + m mj) −1 (equivalently, highest harmonic mean of proportions of nodes in each part of the graph) are isolated first; when α → +∞, the block pairs j of highest value nj mj nm (equivalently, the highest geometric mean of proportions of nodes in each part of the graph) are isolated first. The also extend to non-block diagonal biadjacency matrices B and degree-corrected models, as for Theorem 1. We now illustrate the impact of regularization on the quality of spectral embedding. We focus on a clustering task, using both synthetic and real datasets where the ground-truth clusters are known. In all experiments, we skip the first dimension of the spectral embedding as it is not informative (the corresponding eigenvector is the all-ones vector, up to some multiplicative constant). The code to reproduce these experiments is available online 1. 
We first illustrate the theoretical of the paper with a toy graph consisting of 3 cliques of respective sizes 5, 3, 2. We compute the spectral embeddings in dimension 1, using the second smallest eigenvalue. Denoting by Z the membership matrix, we get X ≈ Z(−0.08, 0.11, 0.05) T for α = 1, showing that the embedding isolates the largest cluster; this is not the case in the absence of regularization, where X ≈ Z(0.1, −0.1, 0.41) T. This section describes the datasets used in our experiments. All graphs are considered as undirected. Table 1 presents the main features of the graphs. Stochastic Block-Model (SBM) We generate 100 instances of the same stochastic block model . There are 100 blocks of size 20, with intra-block edge probability set to 0.5 for the first 50 blocks and 0.05 for the other blocks. The inter-block edge probability is set to 0.001 Other sets of parameters can be tested using the code available online. The ground-truth cluster of each node corresponds to its block. This dataset consists of around 18000 newsgroups posts on 20 topics. This defines a weighted bipartite graph between documents and words. The label of each document corresponds to the topic. . This is the graph of hyperlinks between a subset of Wikipedia pages. The label of each page is its category (e.g., countries, mammals, physics). We consider a large set of metrics from the clustering literature. All metrics are upper-bounded by 1 and the higher the score the better. Homogeneity (H), Completeness (C) and V-measure score (V) . Supervised metrics. A cluster is homogeneous if all its data points are members of a single class in the ground truth. A clustering is complete if all the members of a class in the ground truth belong to the same cluster in the prediction. Harmonic mean of homogeneity and completeness. Adjusted Rand Index (ARI) . Supervised metric. This is the corrected for chance version of the Rand Index which is itself an accuracy on pairs of samples. Adjusted Mutual Information (AMI) Supervised metric. Adjusted for chance version of the mutual information. Fowlkes-Mallows Index (FMI) . Supervised metric. Geometric mean between precision and recall on the edge classification task, as described for the ARI. Modularity (Q) . Unsupervised metric. Fraction of edges within clusters compared to that is some null model where edges are shuffled at random. Normalized Standard Deviation (NSD) Unsupervised metric. 1 minus normalized standard deviation in cluster size. All graphs are embedded in dimension 20, with different regularization parameters. To compare the impact of this parameter across different datasets, we use a relative regularization parameter (w/n 2)α, where w = 1 T n A1 n is the total weight of the graph. We use the K-Means algorithm with to cluster the nodes in the embedding space. The parameter K is set to the ground-truth number of clusters (other experiments with different values of K are reported in the Appendix). We use the Scikit-learn implementation of KMeans and the metrics, when available. The spectral embedding and the modularity are computed with the Scikit-network package, see the documentation for more details 2. We report the in Table 2 for relative regularization parameter α = 0, 0.1, 1, 10. We see that the regularization generally improves performance, the optimal value of α depending on both the dataset and the score function. 
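For reference, the full pipeline (regularized embedding with a relative regularization parameter, K-Means, supervised scores) can be sketched on a small synthetic SBM as follows. The dense eigensolver, the toy block sizes and the small embedding dimension are simplifications of the actual setup, which relies on sparse solvers and the sparse + low rank structure of the regularized matrix:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score, adjusted_mutual_info_score

rng = np.random.default_rng(0)

# A small synthetic SBM with three blocks of decreasing sizes.
sizes = [60, 30, 10]
p_in, p_out = 0.5, 0.05
n = sum(sizes)
labels = np.repeat(np.arange(len(sizes)), sizes)
P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T

def regularized_spectral_embedding(A, dim, rel_alpha):
    w = A.sum()
    alpha = rel_alpha * w / A.shape[0] ** 2        # relative regularization parameter
    A_reg = A + alpha
    D_reg = np.diag(A_reg.sum(axis=1) + 1e-12)     # guard against isolated nodes
    L_reg = D_reg - A_reg
    _, vecs = eigh(L_reg, D_reg)                   # generalized eigenvalue problem
    return vecs[:, 1:dim + 1]                      # skip the uninformative first dimension

for rel_alpha in [0.0, 0.1, 1.0, 10.0]:
    X = regularized_spectral_embedding(A, dim=10, rel_alpha=rel_alpha)
    pred = KMeans(n_clusters=len(sizes), n_init=10, random_state=0).fit_predict(X)
    print(rel_alpha,
          round(adjusted_rand_score(labels, pred), 3),
          round(adjusted_mutual_info_score(labels, pred), 3))
```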
As suggested by Lemma 3, the optimal value of the regularization parameter should depend on the distribution of cluster sizes, on which we do not have any prior knowledge. To test the impact of noise on the spectral embedding, we add isolated nodes with self loop to the graph and compare the clustering performance with and without regularization. The number of isolated nodes is given as a fraction of the initial number of nodes in the graph. Scores are computed only on the initial nodes. The are reported in Table 3 for the Wikipedia for Schools dataset. We observe that, in the absence of regularization, the scores drop even with only 1% noise. The computed clustering is a trivial partition with all initial nodes in the same cluster. This means that the 20 first dimensions of the spectral embedding focus on the isolated nodes. On the other hand, the scores remain approximately constant in the regularized case, which suggests that regularization makes the embedding robust to this type of noise. In this paper, we have provided a simple explanation for the well-known benefits of regularization on spectral embedding. Specifically, regularization forces the embedding to focus on the largest clusters, making the embedding more robust to noise. This was obtained through the explicit characterization of the embedding for a simple block model, and extended to bipartite graphs. An interesting perspective of our work is the extension to stochastic block models, using for instance the concentration proved in . Another problem of interest is the impact of regularization on other downstream tasks, like link prediction. Finally, we would like to further explore the impact of the regularization parameter, exploiting the theoretical presented in this paper. We provide of proof of Theorem 2 as well as a complete set of experimental . The proof of Theorem 2 follows the same workflow as that of Theorem 1. Let Z 1 ∈ R n×K and Z 2 ∈ R m×K be the left and right membership matrices for the block matrix B ∈ R n×m. The aggregated matrix isB = Z T 1 BZ 2 ∈ R K×K. The diagonal matrices of block sizes are We have the equivalent of Proposition 1: Proposition 2. Let x 1, x 2 be a solution to the generalized singular value problem: x 2 = 0 and σ = 0 or x 1 = Z 1 y 1 and x 2 = Z 2 y 2 where y 1, y 2 is a solution to the generalized singular value problem: B y 2 = σD 1 y 1, B T y 1 = σD 2 y 2. Proof. Since the rank of B is equal to K, there are n−K pairs of singular vectors (x 1, x 2) associated with the singular values 0, each satisfying Z T 1 x 1 = 0 and Z T 2 x 2 = 0. By orthogonality, the other pairs of singular vectors satisfy x 1 = Z 1 y 1 and x 2 = Z 2 y 2 for some vectors y 1, y 2 ∈ R K. By replacing these in the original generalized singular value problem, we get that (y 1, y 2) is a solution to the generalized singular value problem for the aggregate graph. In the following, we focus on the block model described in Section 4, where Proof of Lemma 4. The generalized eigenvalue problem associated with the regularized matrix A α is equivalent to the generalized SVD of the regularized biadjacency matrix B α: In view of Proposition 2, the singular value σ = 0 has multiplicity n − K, meaning that the eigenvalue λ = 1 has multiplicity n − K. Since the graph is connected, the eigenvalue 0 has multiplicity 1. The proof then follows from the observation that if (x 1, x 2) is a pair of singular vectors for the singular value σ, then the vectors x = (x 1, ±x 2) T are eigenvectors for the eigenvalues 1 − σ, 1 + σ. 
Proof of Lemma 5. By Proposition 2, we can focus on the generalized singular value problem for the aggregate graph: we have: Observing that J K W 1 y 1 ∝ 1 K and J K W 2 y 2 ∝ 1 K, we get: As two diagonal matrices commute, we obtain: for some constants η 1, η 2, and Letting s 1 = −sign(η 1 (n j + αn) + η 2 m j ) and s 2 = −sign(η 1 n j + η 2 (m j + αm)), we get: and the follows from the fact that x 1 = Z 1 y 1 and x 2 = Z 2 y 2. Proof of Lemma 6. The proof is the same as that of Lemma 3, where the threshold values follow from Lemma 5: Proof of Theorem 2. Let x be the j-th column of the matrix X, for some j ∈ {2, . . ., k}. In view of Lemma 6, this is the eigenvector associated with eigenvalue λ j ∈ (µ j−1, µ j). In view of Lemma 4, all entries of x corresponding to blocks of size n 1, n 2..., n j−1 have the same sign, the other having the opposite sign. In this section, we present more extensive experimental . Tables 4 and 5 present for the same experiment as in Table 2 but for different values of K, namely K = 2 (bisection of the graph) and K = K truth /2 (half of the ground-truth value). As for K = K true, regularization generally improves clustering performance. However, the optimal value of α remains both dataset dependent and metric dependent. Note that, for the NG and WS datasets, the clustering remains trivial in the case K = 2, one cluster containing all the nodes, until a certain amount of regularization. Table 6 presents the different scores for both types of regularization on the NG dataset. As we can see, preserving the bipartite structure of the graph leads to slightly better performance. Finally, Table 7 shows the impact of regularization in the presence of noise for the NG dataset. The are similar as for the WS dataset: regularization makes the spectral embedding much more robust to noise.
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1l_0JBYwS
Graph regularization forces spectral embedding to focus on the largest clusters, making the representation less sensitive to noise.
The exposure bias problem refers to the training-inference discrepancy caused by teacher forcing in maximum likelihood estimation (MLE) training for auto-regressive neural network language models (LM). It has been regarded as a central problem for natural language generation (NLG) model training. Although a lot of algorithms have been proposed to avoid teacher forcing and therefore to alleviate exposure bias, there is little work showing how serious the exposure bias problem is. In this work, we first identify the auto-recovery ability of MLE-trained LM, which casts doubt on the seriousness of exposure bias. We then develop a precise, quantifiable definition for exposure bias. However, according to our measurements in controlled experiments, there's only around 3% performance gain when the training-inference discrepancy is completely removed. Our suggest the exposure bias problem could be much less serious than it is currently assumed to be. Language model (LM) is a central module for natural language generation (NLG) tasks such as machine translation , dialogue response generation, image captioning , etc. For decades, maximum likelihood estimation (MLE) has been the the most widely used objective for LM training. However, there is a popular belief in the natural language processing (NLP) community that standard MLE training will cause "exposure bias" and lead to a performance degradation during the test-time language generation. The exposure bias problem refers to the following discrepancy between MLE training and test-time generation for language models: During training, the language model predicts the next word conditioned on history words sampled from the groundtruth data distribution. And during generation, the model generates words conditioned on history sequences generated by the model itself. However, due to the exposure to real data during training, the language model is biased to only perform well on the ground-truth history distribution. As a , during generation the errors will accumulate along the generated sequence, and the distribution generated by the model will be distorted. The forced exposure to ground-truth data during training is also referred to as "teacher forcing". Given its defintion, the exposure bias problem could rise in the general cases when the model needs to make a sequence of decisions or generations (e.g. music/pixel/speech generation ). In this work, we focus on the task of language generation, because the exposure bias problem is originally proposed in this field , and has since attracted huge research attention. In order to avoid teacher forcing, many training algorithms (; ; ; ; ; ; ; ; ; ; ;) have been proposed as alternatives to MLE training. Most of these works utilize techniques from generative adversarial network (GAN) or reinforcement learning (RL) . In this paper, we refer to these algorithms as non-MLE methods or text GANs. Despite the huge research efforts devoted to alleviate exposure bias, surprisingly, its existence or significance is much less studied. In particular, to the best of our knowledge, no existing work Table 1: Samples of a MLE-trained STOA transformer LM when fed with different types of length-10 history prefix. To save space, we omitted the first 7 words of the random history. attempts to directly show the seriousness of exposure bias in an empirical or theoretical way. This work is motivated by the belief that a good solution should be built upon a testable and quantifiable problem definition. 
In this rest of this paper, we first identify the "self-recovery" ability of popular LM models, which casts doubt on the original claim of exposure bias. We then develop a precise and quantifiable definition of exposure bias, and validate its seriousness in controlled experiments. To study the seriousness of exposure bias in standard MLE LM training, we first stress that the following methodology, although tempting, is wrong: If we can rigorously show that the non-MLE methods proposed to avoid teacher forcing do indeed bring solid generation performance gain, then we can conclude exposure bias is a meaningful problem for the original MLE training. The reason is that we typically do not know the exact underlying reason for the performance gain. For example, despite the huge success of the batch normalization technique in deep learning, whether "internal covariate shift" (which is the motivation of batch norm) exists in deep neural network training remains a question . Therefore, in this work we seek a direct way to validate the seriousness of exposure bias. We focus on the following informal claim that immediately follows from the original definition of exposure bias: During generation, if we set the history distribution to be the ground-truth data distribution instead of the model's own distribution (now that there is no discrepancy between training and testing), then the model's language generation quality should be much better (we will formalize this notion in Section 4 and 5). We start with the following qualitative analysis. We feed a MLE-trained transformer LM on wiki-103 data-set with four kinds of prefixes: model's own samples, data samples, shuffled (word-level) data samples or samples from a uniform random distribution. Then we let the model complete the sentence given these prefixes as history. We list some samples in Table 1 and more in Appendix A (this experiment is also repeated for a LSTM LM). Assuming the seriousness of exposure bias, we expect the quality of generated sentence-completion samples with real-data prefixes to be significantly better than the ones from prefixes of model samples. However, by manual inspection, we do not observe noticeable differences in sample quality. More surprisingly, the model is still able to generate relevant and fairly high-quality samples from shuffled prefixes. Even in the extreme case where random sequences are fed, the model is able to generate reasonable sentences. Due to the recent increasing interest of solving exposure bias in the field of neural machine translation (NMT) , we repeat the above experiment in a standard NMT setting in Appendix A, and get very similar observations. These experiments clearly show that the MLE-trained auto-regressive LMs have the self-recovery ability, i.e. the model is able to recover from artificially distorted history input, and generate reasonably high-quality samples. This phenomenon is clearly in contradiction with the popular claim of exposure bias, that the error induced by the mismatch between history and data distribution should accumulate during the generation process. Motivated by these experiments, in the following sections, we turn to more rigorous methods to quantify the significance of exposure bias. Note that our quantification approaches will be independent of the training procedure and only require inference from the trained model. 
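The completion probe described above has a simple skeleton, sketched below. The next-word distribution is a uniform placeholder standing in for a trained LM (with a real model it would query the network), and the vocabulary and prefix lengths are likewise assumptions made only so that the sketch runs:

```python
import random

random.seed(0)

# `next_word_dist(history)` stands in for a trained LM's conditional distribution.
VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran", "."]
def next_word_dist(history):
    return {w: 1.0 / len(VOCAB) for w in VOCAB}      # placeholder: uniform

def sample_completion(prefix, length):
    history = list(prefix)
    for _ in range(length):
        dist = next_word_dist(history)
        words, probs = zip(*dist.items())
        history.append(random.choices(words, weights=probs)[0])
    return history

# The four prefix types used in the probe.
data_prefix = "the cat sat on the mat . the dog ran".split()
model_prefix = sample_completion([], 10)              # model's own sample as history
shuffled_prefix = random.sample(data_prefix, len(data_prefix))
random_prefix = random.choices(VOCAB, k=10)

for name, prefix in [("data", data_prefix), ("model", model_prefix),
                     ("shuffled", shuffled_prefix), ("random", random_prefix)]:
    print(name, " ".join(sample_completion(prefix, 15)))
```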
The task of auto-regressive language modelling is to learn the probability distribution of the (l+1)-th word W_{l+1} in a sentence conditioned on the word history W_{1:l} := (W_1, ..., W_l). Here, we use the upper-case W_i ∈ V to denote a discrete random variable distributed across the vocabulary V. The lower-case w is used to denote some particular word in V. Given a training data-set D consisting of sentences of length L, standard MLE training minimizes the negative log-likelihood below:
L_MLE(θ) = - E_{W_{1:L} ∼ D} [ Σ_{l=0}^{L-1} log P_M(W_{l+1} | W_{1:l}; θ) ].
Note that in this work we assume all sentences are of length L for simplicity. We denote the generation distribution of the trained LM as P_M, and the ground-truth data distribution as P_D. Readers can assume P_M refers to the generation distribution of an LSTM LM or a transformer LM trained with the MLE objective, which is the major subject of this study. We will mainly present results on LSTM-based models to facilitate comparison with text-GAN works (listed in Section 1), which are mostly implemented on LSTM models. We will also provide results with the transformer model, with very similar observations or measurements. Our quantification mainly relies on measurements of the distance from the model's generation distribution to the data distribution. Hence we define the following notation to simplify expressions. Let P denote the set of probability distributions on the vocabulary V. Let d denote a distance measure between distributions (e.g., total variation distance), d : P × P → R_{≥0}. In this section, we propose an intuitive and seemingly correct quantification approach using marginal distributions. The approach can be applied to real-world text data experiments, but it has a fatal weak point. The discussion will lead us to our final, precise definition of exposure bias in Section 5. Assuming a given history length l, we consider the marginal distribution of W_{l+1} under the following three random processes:
• Draw word sequences of length L from the data distribution P_D. Denote the marginal distribution of the random variable at position l+1 (W_{l+1}) as P^{l+1}_{D|D}, where P^{l+1}_{D|D}(w) := Σ_{w_{1:l}} P_D(W_{1:l} = w_{1:l}) P_D(W_{l+1} = w | w_{1:l}).
• Draw word sequences of length L from the model distribution P_M. Denote the marginal distribution of the random variable at position l+1 as P^{l+1}_{M|M}, where P^{l+1}_{M|M}(w) := Σ_{w_{1:l}} P_M(W_{1:l} = w_{1:l}) P_M(W_{l+1} = w | w_{1:l}).
• Draw history sequences W_{1:l} from the data distribution P_D and the next word from the model. Denote the marginal distribution of the random variable at position l+1 as P^{l+1}_{M|D}, where P^{l+1}_{M|D}(w) := Σ_{w_{1:l}} P_D(W_{1:l} = w_{1:l}) P_M(W_{l+1} = w | w_{1:l}).
By the definition of exposure bias, P^{l+1}_{M|M} suffers from the training-testing discrepancy, while P^{l+1}_{M|D} should be closer to the true distribution P^{l+1}_{D|D}. To measure this discrepancy, define the marginal generation deviation (MGD) at history length l of history distribution P_H with metric d as
MGD(P_H, l, d) := d(P^{l+1}_{M|H}, P^{l+1}_{D|D}),
where P_H ∈ {P_M, P_D} denotes the history distribution. MGD measures the deviation of the marginal distribution of W_{l+1} from the ground-truth data distribution. Finally, we define the rate of exposure bias (EB-M) at history length l of model P_M as the ratio between the MGD measurements when the two different history distributions are fed:
EB-M(P_M, l, d) := MGD(P_M, l, d) / MGD(P_D, l, d).
For MLE-trained models, EB-M is expected to be larger than 1, and a larger EB-M indicates a more serious exposure bias problem for the trained model. For the metric d, we consider two popular probability metrics: total variation distance (denoted d_TV) and Jensen-Shannon divergence (denoted d_JS). In this section, we focus on answering the following question: "Does the EB-M measurement correctly reflect the significance of exposure bias?" In short, our answer is: not really.
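For concreteness, the sketch below shows how EB-M can be estimated from samples, following the sample-and-count procedure detailed in Appendix B. It assumes an oracle data model exposing both a history sampler and a next-word distribution (as in the synthetic setting of Section 5); with real data, P^{l+1}_{D|D} would instead be estimated by counting W_{l+1} over held-out samples. All interface names are placeholders, not the paper's implementation.

```python
import numpy as np

def total_variation(p, q):
    # d_TV(p, q) = 0.5 * sum_w |p(w) - q(w)|
    return 0.5 * np.abs(p - q).sum()

def estimate_marginal(sample_history, next_word_dist, l, vocab_size, n_samples):
    """Estimate the marginal of W_{l+1} when histories W_{1:l} come from
    `sample_history` and the next word is predicted by `next_word_dist`."""
    marginal = np.zeros(vocab_size)
    for _ in range(n_samples):
        history = sample_history(l)              # list of l token ids
        marginal += next_word_dist(history)      # length-|V| probability vector
    return marginal / n_samples

def eb_m(model, data, l, vocab_size, n_samples=100_000):
    # P^{l+1}_{D|D}: history and next word both from the data distribution
    p_dd = estimate_marginal(data.sample_history, data.next_word_dist, l, vocab_size, n_samples)
    # P^{l+1}_{M|M}: history sampled from the model, next word from the model
    p_mm = estimate_marginal(model.sample_history, model.next_word_dist, l, vocab_size, n_samples)
    # P^{l+1}_{M|D}: history sampled from the data, next word from the model
    p_md = estimate_marginal(data.sample_history, model.next_word_dist, l, vocab_size, n_samples)
    mgd_model_history = total_variation(p_mm, p_dd)   # MGD(P_M, l, d_TV)
    mgd_data_history = total_variation(p_md, p_dd)    # MGD(P_D, l, d_TV)
    return mgd_model_history / mgd_data_history
```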
The problem is that the distortion of the marginal P^{l+1}_{M|M} is affected not only by the presumably existing exposure bias problem, but also by the mismatch between the history distributions P_M and P_D for W_{1:l}, which grows with the length of the history. Therefore, even if the measured EB-M is significantly larger than one, we cannot conclude that exposure bias causes serious deterioration. We provide an example to illustrate this argument: Example 1. Suppose L = 2 and V = {A, B}. P_D and P_M can be crafted such that the measured EB-M is large, while the only problem P_M has is the mismatch between the history distributions (P_M and P_D) for W_1. The next set of experiments also suggests that EB-M does not precisely reflect exposure bias. On the EMNLP-news data-set (specified in Appendix B), we compare EB-M measurements for several non-MLE training methods with the baseline MLE model. We include results for Scheduled Sampling (SS), Cooperative Training (CoT), and Adversarial Ranking (RankGAN). We provide implementation details for the non-MLE methods in Appendix C. Intuitively, these methods will cause the model to be biased to behave well with model samples as history, instead of data samples. Therefore, we expect the EB-M measurements for non-MLE-trained models to be smaller than those for MLE-trained models. However, Figure 1 shows that the measurements for the different training frameworks are almost the same. We believe the reason is that the EB-M measurements only reflect the trivial mismatch between the history distributions. Is it possible that the original definition of exposure bias exactly refers to this mismatch between the model and data history distributions? Note, however, that this mismatch is inevitable for any imperfect model, and non-MLE training algorithms cannot solve it. We believe a better, more precise definition is needed to discriminate exposure bias from this trivial mismatch. Motivated by this view, we propose a second approach in the section below. Following the discussion in the last section, we wish our measurement to be independent of the quality of the history distribution. In light of that, we design a quantity to measure the model's conditional generation quality. Let P_H ∈ {P_M, P_D} denote the history distribution as in the MGD definition. With history length l fixed, we define the conditional generation deviation (CGD) with history distribution P_H for P_M using metric d as
CGD(P_{M|H}, l, d) := E_{W_{1:l} ∼ P_H} [ d(P_D(· | W_{1:l}), P_M(· | W_{1:l})) ],
where we assume that P_D(· | W_{1:l}) is computable, and use it to measure the quality of the model's conditional distribution. For the choice of the distribution distance d, in addition to d_TV and d_JS, we introduce the greedy decoding divergence (d_GD), defined as
d_GD(P, Q) := 1{ argmax_w P(w) ≠ argmax_w Q(w) },
where 1 is the indicator function and P, Q ∈ P. The distance d_GD reflects the model's accuracy during greedy decoding. Similar to MGD, exposure bias should imply a significant gap between CGD(P_{M|M}, l, d) and CGD(P_{M|D}, l, d). We again define the rate of exposure bias at history length l with metric d to be
EB-C(P_M, l, d) := CGD(P_{M|M}, l, d) / CGD(P_{M|D}, l, d).
For our definition of EB-C, a natural question is why we only focus on the generation distribution of the very next word. The reason is that we want to precisely measure how the error caused by the history part affects the generation part, by keeping the two separate. If we instead measured the deviation of, for example, two sampled tokens, the definition would be confounded: the second sampled token would be affected not only by the accumulated error induced by the history (sampled from the model), but also by the first generated token as history.
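A minimal Monte-Carlo sketch of CGD and EB-C follows, assuming the synthetic setting of the next section where an oracle LM plays the role of P_D and both models expose a next-word distribution for a given history. Interface names are placeholders for illustration.

```python
import numpy as np

def d_tv(p, q):
    return 0.5 * np.abs(p - q).sum()

def d_gd(p, q):
    # greedy decoding divergence: 1 if the two distributions disagree on the argmax word
    return float(np.argmax(p) != np.argmax(q))

def cgd(history_sampler, oracle, model, l, d, n_samples=10_000):
    """CGD(P_{M|H}, l, d) = E_{W_{1:l} ~ P_H}[ d(P_D(.|W_{1:l}), P_M(.|W_{1:l})) ]."""
    total = 0.0
    for _ in range(n_samples):
        history = history_sampler(l)                      # W_{1:l} drawn from P_H
        total += d(oracle.next_word_dist(history),        # P_D(. | W_{1:l})
                   model.next_word_dist(history))         # P_M(. | W_{1:l})
    return total / n_samples

def eb_c(model, oracle, l, d=d_gd, n_samples=10_000):
    cgd_model_history = cgd(model.sample_history, oracle, model, l, d, n_samples)
    cgd_data_history = cgd(oracle.sample_history, oracle, model, l, d, n_samples)
    return cgd_model_history / cgd_data_history
```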
To get a better understanding of the intuition behind the definition of EB-C, we recommend that readers see Appendix A about our NMT experiment. Since CGD requires inference from the ground-truth data distribution P_D, we first consider experiments in a synthetic setting. In the text-GAN literature, a randomly initialized one-layer LSTM model with a hidden dimension of 32 is usually used as P_D in synthetic experiments (we denote this setting as M_random-32). However, that model is small-scale and does not reflect any structure existing in real-world text. To improve upon this approach, we take the MLE baseline model trained on EMNLP-news data (described in Appendix B) as P_D in this synthetic setting. We denote the data model (P_D) as M_news-512. We then train two LSTM LMs (P_M) with different capacities using samples from the data model, with the standard MLE objective. One is a one-layer LSTM with a hidden width of 512 (denoted LSTM-512); the other has a hidden width of 32 (denoted LSTM-32). We train P_M for 100 epochs using the Adam optimizer with learning rate 0.001. In each epoch, 250k sentences (the same as the size of the original EMNLP-news data) of length L = 50 are sampled from M_news-512 as training data to avoid over-fitting. We show the perplexity (PPL) of the trained models in Appendix F. Finally, EB-C is calculated using 100k samples from P_M and P_D. In Figure 2, we show EB-C measurements with the different metrics d, and the two models give similar results. EB-C has a steady but slow increasing trend as history length increases. This is expected as a consequence of exposure bias, because P_M deviates farther from P_D as history length increases. However, the average value of EB-C is less than 1.03 (the largest average value is from d_JS for the LSTM-512 experiment), meaning that the gap between CGD(P_{M|M}, l, d) and CGD(P_{M|D}, l, d) is not large. Also, note that in most NLG applications (such as machine translation or image captioning), the generated sequence typically has a short length (less than 20). In that range of history lengths, the EB-C measurements indicate that exposure bias has only a minimal influence. In Appendix E, we repeat the experiment for a transformer LM, and get very similar EB-C measurements. These measurements imply a striking conclusion: (Informal) Even if all the bad effects from exposure bias for MLE LM training are removed, the relative performance gain is at most 3%. If the sequence length is not very long, the gain is less than 1%. To dive deeper into the cause of the gap in CGD, we experiment with corrupted versions of P_M as the history distribution. We first specify a corrupt rate c ∈ [0, 1], and randomly substitute each word in a history sample from P_M with a "noise" word drawn uniformly from the vocabulary with probability c. Consequently, a larger c causes the history distribution to deviate farther from the ground-truth P_D. In Figure 3, we show CGD measurements versus the corrupted history P_M^corrupt. Large gaps are observed between CGD(P_{M|M^corrupt}) and CGD(P_{M|D}). Therefore, the small gap between CGD(P_{M|M}) and CGD(P_{M|D}) in Figure 2 results from the small deviation between the history distributions P_M and P_D. In other words, P_M has learned a "good enough" distribution that is able to keep it in the well-behaving region during sampling. With these observations, we conclude that, in the synthetic setting considered, exposure bias does exist, but is much less serious than it is presumed to be.
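A sketch of the history-corruption probe just described: with probability c, each token of a model-sampled history is replaced by a uniformly random vocabulary word before CGD is computed. It reuses cgd and d_tv from the EB-C sketch above; everything else is an illustrative assumption.

```python
import numpy as np

def corrupt_history_sampler(model, corrupt_rate, vocab_size, rng=np.random.default_rng(0)):
    """Wrap the model's history sampler so each word is replaced by a uniformly
    random "noise" word with probability `corrupt_rate`."""
    def sample(l):
        history = list(model.sample_history(l))
        for i in range(len(history)):
            if rng.random() < corrupt_rate:
                history[i] = int(rng.integers(vocab_size))
        return history
    return sample

def cgd_under_corruption(model, oracle, l, vocab_size, rates=(0.0, 0.1, 0.3, 0.5)):
    # CGD(P_{M|M^corrupt}) for a sweep of corrupt rates, reusing cgd() and d_tv()
    return {c: cgd(corrupt_history_sampler(model, c, vocab_size), oracle, model, l, d_tv)
            for c in rates}
```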
Although there exists a mismatch between the history distributions P_M and P_D, the mismatch is still in the model's "comfortable zone". In other words, the LSTM LM is more robust than exposure bias claims it to be. To concretize this argument, we provide an example LM and argue that MLE training is unlikely to generate models with a large EB-C value. Example 2. Again suppose L = 2 and V = {A, B}, and the ground-truth data distribution is uniform on {AA, AB, BB, BA}. P_M is crafted such that the model behaves badly when W_1 = A, while assigning W_1 = A high probability during sampling. However, this crafted model is unlikely to be an outcome of MLE training. The fact that P_M(· | W_1 = B) is better modeled indicates that in the training data more sentences begin with W_1 = B than with W_1 = A, so MLE training should assign more probability to P_M(W_1 = B), not the other way around. (If we instead set P_M(W_1 = A) = 0.1, then EB-C(P_M, 1, d_TV) becomes 0.2, meaning that the model has better conditional generation performance during sampling.) We also measure EB-C for the non-MLE training methods in a smaller-scale synthetic setting, as we find it hard to do a fast implementation of RankGAN for the LSTM-512 setting. We find that RankGAN and CoT give lower EB-C measurements than MLE, which is expected, as these methods avoid teacher forcing. (The MLE model is used as the pre-trained model for the RankGAN generator; the MLE model has an oracle NLL of 8.67, and RankGAN's oracle NLL is 8.55.) For CoT, at short history lengths, EB-C is even less than 1. We believe the reason is that CoT tries to make the model biased to behave better when fed with model samples. However, SS gives worse EB-C measurements compared to MLE, for which we currently do not have a good explanation; we refer readers to the literature for a discussion of the SS objective. To the best of our knowledge, this is the first direct empirical evidence that text GANs do indeed alleviate the exposure bias problem. It also indicates that EB-C correctly reflects the significance of exposure bias. We believe the reason why EB-C is still not less than 1 is that text GANs still rely heavily on MLE pre-training.
Table 2: An illustration of the next-word collection process. The choices are shuffled. The first history sample is from real data, and the second history sample is from the trained model.
Table 3: EB-C measurements with humans as P_D.
In this section, we design experiments to efficiently estimate EB-C for a SOTA transformer LM with real humans as P_D, by utilizing the Amazon Mechanical Turk (AMT) platform. Given an MLE-trained LM as P_M, by examining the definitions of EB-C and CGD, it is clear that the only obstacle is that we do not have access to P_D(· | W_{1:l}) for a given history W_{1:l}. So, in this section, we focus on the greedy decoding divergence (d_GD) metric, which only requires the turkers to give the most probable next-word prediction, instead of the full distribution (which is clearly intractable). In our preliminary trials, we find it is still very hard for a person to guess the next word, even with real-data history samples. The reason is that the vocabulary is very big, and the turkers may not be familiar with the context (e.g., Wikipedia). To alleviate that problem, we design the following simplification: for a given history, we let the model output its top-5 next-word predictions, and we then ask the turkers to choose among the 5 choices (the turker can also express that he/she thinks none of them is likely). Finally, we examine whether the turker's choice is indeed the model's top-1 prediction.
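The sketch below illustrates how one annotation item can be built and scored under this simplification: the model's top-5 candidates are shuffled and shown to the turker, and each answer yields one binary d_GD sample (does the choice match the model's top-1 prediction?). The model and vocabulary interfaces are assumptions for illustration, not the actual HIT-generation code.

```python
import random
import torch

@torch.no_grad()
def build_hit_item(model, history_ids, id2word, k=5):
    """Build one annotation item: the model's top-k next-word candidates, shuffled."""
    logits = model(torch.tensor(history_ids).unsqueeze(0))[0, -1]   # next-word logits (assumed shape)
    topk = torch.topk(logits, k).indices.tolist()
    choices = [id2word[i] for i in topk]
    random.shuffle(choices)
    return {"history": [id2word[i] for i in history_ids],
            "choices": choices + ["none of them is likely"],
            "model_top1": id2word[topk[0]]}

def d_gd_sample(item, human_choice):
    """One Monte-Carlo sample of d_GD: 0 if the human picks the model's top-1, else 1."""
    return 0.0 if human_choice == item["model_top1"] else 1.0

# EB-C with humans as P_D is then the ratio of the mean d_GD over items whose
# histories were sampled from the model, to the mean over items with real-data histories.
```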
We illustrate this process in Table 2. We use the code of Transformer-XL to train a SOTA transformer LM on the wiki-103 data-set. We favour the wiki-103 data-set because it is large-scale and has long (over 30 words) paragraphs, which is useful for the measurement of exposure bias. The model is a 16-layer Transformer-XL model with a hidden dimension of 410. Since the estimation of CGD(P_{M|D}, l, d) requires large amounts of unseen real-data samples, we use half of the wiki-103 training data (around 900k sentences and 50m words) to train the model P_M, and save the other half as samples from P_D. Other training configurations (learning rate, batch size, etc.) are not changed. The resulting model P_M has a test-set PPL of 27.81 (if trained on the full training data, the PPL would be 24.02). We collect data to estimate EB-C at history lengths 10, 20, and 30. For each pair of length and history model (P_M or P_D), we collect 10k d_GD samples (via next-word prediction) from turkers on the AMT platform. More details about the AMT setup are provided in Appendix D. The results are shown in Table 3. The EB-C measurements are strikingly similar to the results in our synthetic experiments, in that removing the training-testing discrepancy only gives around a 2% relative performance gain. This further strengthens our claim that exposure bias is only a minor problem for MLE-based LM training. Several recent works attempt to carefully evaluate whether non-MLE training methods (e.g., adversarial training) can give superior NLG performance to standard MLE training for RNN LMs. One work tunes a "temperature" parameter in the softmax output and evaluates models over the whole quality-diversity spectrum; another proposes to use the "Reverse Language Model score" or the "Frechet InferSent Distance" to evaluate the model's generation performance; a third proposes a method for approximating a distribution over tokens from a GAN, and then evaluates the model with standard LM metrics. These works arrive at a similar conclusion: the general performance of text GANs is not convincingly better than, and is sometimes worse than, standard MLE training. Hence, to some extent, they imply that exposure bias may not be a serious problem in MLE training. However, as we argued in Section 2, one cannot draw direct conclusions about exposure bias from these results. For example, it is also possible that exposure bias is indeed serious for MLE training, but text GANs do not solve the problem well enough. In this work, we first identify the self-recovery ability of MLE-trained LMs, which casts doubt on the seriousness of exposure bias, a problem that has been regarded as central for MLE training by the LM community. We then explore two intuitive approaches to quantify the significance of exposure bias for LM training. The first quantification, EB-M, relies on the marginal generation distribution and reveals some vagueness in the original definition of exposure bias. We argue that we should focus on the model's generation performance in terms of its conditional distribution, and propose a second quantification, EB-C, which we regard as the precise definition of exposure bias. We design an evaluation of EB-C at different history lengths with real humans (turkers from AMT) as the data model, for a SOTA transformer LM. It is shown that removing the training-testing discrepancy only gives around a 2% performance gain. Our synthetic experiments also give very similar measurements.
By analyzing EB-C measurements with perturbed history samples, we hypothesize that although the mismatch between the data and model distributions for the history prefix exists, it is still in the model's "comfortable zone". With these results, we claim that, contrary to the popular belief, exposure bias is only a minor problem in MLE-based LM training. To wrap up, we discuss the fundamental question "Is MLE training really biased?" from the perspective of objective functions. Note that the MLE objective can be re-written as
L_MLE(θ) = D_KL(P_D ∥ P_M(θ)) + H(P_D),
where D_KL denotes the Kullback-Leibler divergence, H(P_D) is a constant independent of the model, and θ denotes the trainable parameters in P_M. Therefore, MLE training is minimizing the divergence between P_M, which is exactly the model's sampling distribution, and P_D. While it is true that the training is "exposed" to data samples, we cannot simply deduce that the objective is "biased". We want to end our discussion with two remarks. First, the proposed quantification approaches should not be used as the only metric for NLG. For example, a position-aware uni-gram LM, which generates words independently of previous context, has no exposure bias problem and can pass our test easily. Second, the intention of this work is not to discourage researchers from exploring non-MLE training algorithms for LMs. It is completely possible that a training objective different from MLE can lead to better generation performance. However, although non-MLE algorithms avoid teacher forcing, these algorithms (using GAN or RL, for example) are usually less stable and more difficult to tune. Given that the quantified measurement of exposure bias is insignificant, we think it should be questioned whether adopting these techniques to avoid exposure bias is a wise trade-off. In Table 6, we provide more samples of an MLE-trained transformer LM (discussed in Section 2) when fed with different kinds of history. And in Table 7 we repeat the experiment for an LSTM LM trained on the EMNLP-News data. In Table 4 we repeat the preliminary experiment in Section 2 for a standard NMT setting. We train a 6-layer transformer model with hidden dimension 1024 on the IWSLT'14 German-to-English data set. We feed the trained model with four types of prefix during decoding, which represent different levels of training-decoding discrepancy. Note that the source input is kept intact. The result is very similar to (or even more striking than) our language model experiment: the data prefix does not seem to help, and in the extreme case of a random prefix, the model still generates a fairly good translation. In Section 2 we summarize this observation as the auto-recovery ability.
Table 4: A standard NMT transformer model fed with different types of length-3 history prefix. We did not do any cherry-picking. The "@@" is because BPE tokenization is used. "DATA" means the first three output tokens are forced to be correct. "NORMAL" means no prefix is forced during decoding. "UNREL" means the first three tokens are forced to be from another random unrelated sentence (which is wrong but grammatical). "RAND" means the first three tokens are completely random words.
To interpret the UNREL results, we should not directly compare the translation generated from an unrelated prefix to the target translation. In fact, we cannot even compare part of it (e.g., the part after the length-3 prefix). Instead, we highlight the surprising fact that although the model is forced to begin (conditioned) with a wrong prefix, it still comes up with a reasonable translation. This is not an
easy task even for human translators, yet the model does fairly well. Again, this contradicts with the "exposure bias" hypothesis that a MLE-trained LM will produce a increasingly deviated sequence when initiated with a non-perfect prefix. Actually, during generation the model self-corrects the error in the prefix. It is also the major motivation of our proposed EB-C measurement (Section 5), which is based on the view of measuring distances between conditional distributions. One problem in the implementation of EB-M is to estimate the described marginal distributions of W l+1. We adopt a simple sample-and-count method: P l+1 D|D is estimated by the distribution (histogram) of W l+1 from a number (to be specified below) of sentences sampled from the data distribution. For P l+1 M |M and P l+1 M |D, we first draw a number of history samples W 1:l from the corresponding history model (model distribution and data distribution respectively). We then feed sampled history sequences into the trained model and estimate the marginal distribution of the (l + 1) th word by averaging the predicted distribution P M (·|W 1:l). We measure EB-M for MLE-trained LSTM LM on two popular data-sets: EMNLP-news (EMNLP 2017 WMT News Section), and wikitext-103 7. For EMNLP-news we set L = 20, and only use data samples whose length is longer than L. The ing training/validation/test set has 268k/10k/10k sentences. The vocabulary is of size 5k. We use the 10k samples in the test set for evaluation of EB-M. Note that the EMNLP-news data-set is widely used in text GAN literatures;. We train a one-layer LSTM LM of hidden dimension 512 as the MLE baseline model for EMNLP-news. For wikitext-103, we set L = 50, and regard a paragraph in the original data as a long sentence. Further, we use half of the data for LM training, and utilize the other half for EB-M evaluation. The ing training/validation/test/evaluation set has 300k/1.5k/1.5k/300k sentences. The vocabulary is of size 50k. We train a two-layer LSTM LM of hidden dimension 1024 as the MLE baseline model for wikitext-103. For MLE baseline model training, the Adam optimizer is used with learning rate 0.001, no Dropout is applied. The model is trained for 100 epochs. We first measure EB-M on the wikitext-103 data-set, which has large amount of evaluation data. The are shown in Figure 5. We provide EB-M measurements with metric d T V in Appendix E, as they are similar to those using metric d JS. It is shown that the measurements become stable when using 100k data/model samples. EB-M has an average value of 1.10, indicating a significant gap between the model's MGD when fed with history from P D or P M. Further, we observe a steady growth of EB-M along the length of history, which is expected as an outcome of exposure bias. However, as discussed in Section 4.2, these measurements can only show that the LM does have better (marginal) generation quality when fed with data prefixes, but does not provide informative information for the significance of exposure bias. We implement our MLE baseline and scheduled sampling (SS) in PyTorch. For SS, we use a linear decay schedule to move from complete teacher forcing to replace-sample rate of 0.1. We find that larger rate will give worse performance. For CoT, we use a PyTorch implementation in https://github.com/pclucas14/ GansFallingShort. We use a mediator model that has twice the size of the generator. We set M-step to be 4, and G-step to be 1. 
For RankGAN, we use a TensorFlow implementation in https://github.com/ desire2020/RankGAN. Note that in our non-MLE experiments, the generator model is set to be the same size with the baseline MLE model. We tune the non-MLE methods using the corpus-BLEU metric, which is widely used in text GAN literature. In this section we provide more details for the AMT evaluation discussed in Section 5.3. We show the HIT interface in Figure 6. Each HIT will include 10 pairs of context and its corresponding choices. Five of them are history samples from real data, and the other five is from the trained model. The history samples are mixed, so that the turker doesn't know whether the history sample is from real data or the model. The next-word choices are also shuffled. The history length of the context could be 10, 20, or 30. We collect around 10k HITs for each history length configuration. The same history sample is not repeated across the HITs. We limit each turker to do at most 200 HITs. For all history length configurations, there are around 300 unique turkers. As shown by Figure 7, most turkers conduct less than 20 HITs. In Figure 8, we show that we are able to get stable measurements of EB-C with 100k samples for the LSTM-512 synthetic experiment. In Figure 9 and Figure 10 we provide EB-M measurements with metric d T V discussed in Section 4.2, the are similar to those using metric d JS. In Figure 11, we provide EB-C measurements of a 3-layer transformer LM with 512 hidden dimension, in the synthetic setting. We show PPL for model trained on EMNLP-news data-set in Table 5. The MLE model for wiki-103 data-set discussed in Section 4.2 has PPL 84.58. Note that due to our special setting 8, our PPL is not directly comparable to state-of-art LM on these data-sets. 8 We only keep sentences of length longer than L, and for wiki-103, only half of training data is used. At the same time, she responded against the package of short-form compatible boats... Table 6: More samples of a STOA MLE-trained transformer LM (on the wiki-103 data-set) when fed with different kinds of history. To save space, we omitted the first 7 words of the random history. Model Samples as Hisotry → Model Samples it was only a pieces that had gone up to → the forest and forces the shoppers about their chronic young i mean we didn' t know what i haven →' t considered through, " she told bbc radio if he were the president -elect, he was → known that he would run a force in business at but these are not as tired of " the same → message that the harry actor does have been hours in first opinion the agent have taken four seconds, or → if they don' t only know anything, were " the economy of the uk is low enough of → people of defending where americans think that " brexit, the economy grew on 1. 
6 % since the → us voted, and when it turned around 200 streets i was able to produce on my own, which → is good; now that the theatre i' ve " i' ve not buying boys i addressed many → nervous times before, as a teenager made me is we think about one -third of the struggles we → actually want to see those very well that even more the story of a album -which made public -→ was still fantastic, and for the second time in " the test comes up before tuesday and when we →' re feeling ahead again soon, " she posted a year on when he was last seen in his → home and he did not see him, his suffering brady has forced the 9 -known targets to get → all -of -12 gun migration and performing communication i asked if he himself did, i managed to → show all my charges at all, it used to Data Samples as Hisotry → Model Samples what this group does is to take down various different → players in the future and we play in paris we over 1, 600 a day have reached greece this → gone in 2013 and it planned to allow civilians on " we' re working through a legacy period, → and i am proud of the experience of the worker' the first time anyone says you need help, → you don' t have put accurate press into the out of those who came last year, 69 per → cent of women can really take the drive to avoid he has not played for tottenham' s first team → this season then and sits down 15 -0 with so you have this man who seems to represent this → bad story, which he plays minutes -because he cnn: you made that promise, but it wasn →' t necessarily at all the features he had in this is a part of the population that is unk → lucky to have no fault today, and it would they picked him off three times and kept him out → of the game and was in the field, the the treatment was going to cost $ 12, 000 → as a of the request of anyone who was but if black political power is so important, why → doesn' t we becomes the case that either stands local media reported the group were not looking to hurt → the animals, but would never be seen to say Table 7: Samples of a MLE-trained LSTM LM (on the EMNLP-news data-set) when fed with different kinds of history. To save space, we omitted the first 7 words of the random history.
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJg2fTNtwr
We show that exposure bias could be much less serious than it is currently assumed to be for MLE LM training.
The ability of algorithms to evolve or learn (compositional) communication protocols has traditionally been studied in the language evolution literature through the use of emergent communication tasks. Here we scale up this research by using contemporary deep learning methods and by training reinforcement-learning neural network agents on referential communication games. We extend previous work, in which agents were trained in symbolic environments, by developing agents which are able to learn from raw pixel data, a more challenging and realistic input representation. We find that the degree of structure found in the input data affects the nature of the emerged protocols, and thereby corroborate the hypothesis that structured compositional language is most likely to emerge when agents perceive the world as being structured. The study of emergent communication is important for two related problems in language development, both human and artificial: language evolution, the development of communication protocols from scratch BID27; and language acquisition, the ability of an embodied agent to learn an existing language. In this paper we focus on the problem of how environmental or pre-linguistic conditions affect the nature of the communication protocol that an agent learns. The increasing realism and complexity of environments being used for grounded language learning BID8 BID18 present an opportunity to analyse these effects in detail. In line with previous work on emergent communication, we are strongly motivated by the view that language derives meaning from its use BID39 BID37. This perspective especially motivates the study of language emergence in cases where co-operative agents try to achieve shared goals in game scenarios BID34 BID6 BID26, and is related to the study of multi-agent and self-play methods that have found great success in other areas of machine learning BID1 BID30. Here we focus on simple referential games, in which one agent must communicate to another a target object in the agent's environment. One of the most important properties of natural language is compositionality. Smaller building blocks (e.g. words, morphemes) are used to generate unbounded numbers of more complex forms (e.g. sentences, multi-word expressions), with the meaning of the larger form being determined by the meanings of its parts and how they are put together BID14. Compositionality is an advantage in any communication protocol as it allows in principle infinite expression through a finite dictionary and a finite set of combination rules. In emergent communication research, previous work has shown that agents can produce (somewhat) compositional protocols when engaging in language games BID34. However, the computational agents were typically situated in artificial worlds containing just a handful of objects, represented as disentangled, structured, and sometimes even atomic symbols, e.g. attribute-based or one-hot vectors BID2 BID5 BID13 BID0 BID26. However, humans receive raw sensorimotor hank you! h a n k y o u! rather than symbolic input, and little work to date has tested whether these findings carry over when agents are situated in less idealized worlds that bear more similarity to the kind of entangled and noisy environments to which humans are typically exposed. In this work, in the context of referential communication games (see FIG0, we contrast the of two studies that lie at the extremes of how much structure is provided by the environment. 
The first study (Section 3) focuses on symbolic representations, where objects are represented as bags-of-attributes; this representation is inherently disentangled since dimensions encode individual properties. The second study (Section 4) considers raw perceptual input, hence data that more closely resembles what humans are exposed to. Clearly, the latter is a more challenging and realistic scenario as the computational agents are operating on entangled inputs with no pre-coded semantics. Crucially, both studies use the same referential game setup, the same learning procedure (policy learning methods) and the same neural network agent architectures. We show that reinforcement learning agents can successfully communicate, not only when presented with symbolic and highly structured input data, but (and more importantly) even when presented with raw pixel input. This opens up the possibility of more realistic simulations of language emergence. We successfully use the learning signal from the referential game to train agents end-to-end, including cases where the agents need to perform visual processing of images with a convolutional neural network. However, we find that the agents struggle to produce structured messages when presented with entangled input data BID3 due to the difficulty of uncovering the true factors of variation, corroborating the hypothesis of BID32 that structured (compositional) language is most likely to emerge when agents perceive the world as structured. The referential game is implemented as an instance of multi-agent co-operative reinforcement learning, in which two agents take discrete actions in their environment in order to maximize a shared reward. The referential game is a variant of the Lewis signaling game BID23, which has been extensively used in linguistic and cognitive studies in the context of language evolution (e.g., BID7 ; BID35 BID33 BID22 .Figure 1 provides a schematic description of our setup. First, a speaker is presented with a target object (highlighted as CAR in the symbolic example on the left, and highlighted as the far right image in the pixel example on the right). Then, by making use of an alphabet consisting of primitive discrete symbols ("22", "10", "0","2"), the speaker constructs a message describing that object ("22 2 0"). We will refer to the set of all distinct messages generated by the speaker as their lexicon or protocol. Finally, the listener is presented with the target and a set of distractor objects, and-by making use of the speaker's message-has to identify the target object from the set of candidate objects. Communicative success is defined as the correct identification of the target by the listening agent. Formally, the attribute-based object vectors (disentangled) or the pixel-based images (entangled) are the set of pre-linguistic items W = {o 1, . . ., o N}. From this set we draw a target t ∈ W and subsequently DISPLAYFORM0 The speaker has only access to the target t, while the listener receives candidate set C = t ∪ D, not knowing which of the elements in C is target t. The speaker encodes t into a dense representation u using an encoder f S (t, θ S f). The function of this encoder depends on the type of pre-linguistic data used and is discussed separately for each study. Given an alphabet A of discrete unit symbols (akin to words) and u, the speaker next generates a discrete, variable-length, bounded message m by sampling symbols from a recurrent policy π DISPLAYFORM0 ). 
The sequence generation is terminated either by the production of a stop symbol or when the maximum length L has been reached. We implement the decoder as a single-layer LSTM BID19. Note that the symbols in the agents' alphabet A have no a priori meaning; rather, these symbols get grounded during the game. The listening agent uses a similar encoder to the speaker but has independent network weights (θ L f). Applying this encoder to all candidate objects in a set DISPLAYFORM1 For encoding the message m, we use a single-layer LSTM, denoted h L, which produces an encoding z: DISPLAYFORM2 ). Given encoded message z and candidates U, the listener predicts a target object t ∈ C following a policy π L implemented using a non-parametric pointing module; this module samples the predicted object from a Gibbs distribution computed via the dot product between vector z and all encoded candidates u ∈ U. See Appendix B for information regarding the agents' architecture. At inference time, we replace the stochastic sampling of the speaker's message and the listener's stochastic pointing module with deterministic processes. For the pointing module, the object with the highest probability is chosen. For the speaker's message, this is generated in a greedy fashion by selecting the highest-probability symbol at each step. All weights of the speaker and listener agents, θ = {θ DISPLAYFORM0 are jointly optimized while playing the game. We emphasize that no weights are shared between the speaker and the listener, and the only supervision used is communicative success, i.e. whether the listener identified the correct target. The objective function that the two agents maximize for one training instance is: DISPLAYFORM1 where R is the reward function returning 1 if t = t (if the listener pointed to the correct target) and 0 otherwise. To maintain exploration in the speaker's policy π S of generating a message, and the listener's policy π L of pointing to the target, we add to the loss an entropy regularization term BID25. The parameters are estimated using the REINFORCE update rule BID38 Table 1: Commumicative success (training accuracy in percentage) with varying maximum message length. alphabet size denotes the effective size of the symbol set used from a maximum of 100. lexicon size is the effective number of unique messages used. topographic ρ reports the structural similarity in terms of Spearman ρ correlation between the message and the object vector space. All Spearman ρ correlations throughout the paper are significant with p < 0.01. We first present experiments where agents are learning to communicate when presented with structured and disentangled input. We use the Visual Attributes for Concepts Dataset (VisA) of BID29, which contains human-generated per-concept attribute annotations for 500 concrete concepts (e.g., cat, sofa, car) spanning across different categories (e.g., mammals, furniture, vehicles), annotated with 636 general attributes (e.g., has tail, is black, has wheels). We disregarded homonym concepts (e.g., bat), thus reducing our working set of concepts to 463 and the number of attributes to 573 (after eliminating any attribute that did not occur with the working concepts). On average, each concept has 11 attributes. All pre-linguistic objects are represented in terms of binary vectors o ∈ {0, 1} 573. Note that these representations do carry some inherent structure; the dimensions in the object vectors are disentangled and so each object can be seen as a conjunction of properties. 
Speaker and listener convert the pre-linguistic representations to dense representations u by using a single-layer MLP with a sigmoid activation function. In all experiments, we set the number of candidate objects K to five, meaning there were four wrong choices per correct one (ing in a 20% random baseline). Inspired by BID21, who show that non-compositional language emerges in the case of overcomplete alphabets, we set the size of alphabet A to 100 symbols, which is smaller than the size of the set of objects. We first report model performance on the training data, comparing different settings for the maximal allowed message length (2, 5 or 10 symbols). Results are presented in Table 1 (ignore the last row topographic ρ which will be explained in later sections).In the case of the shortest message settings (maximum length 2), our trained agents on average only develop a protocol of 31 unique messages used to describe 363 training concepts (leaving aside 100 for testing). This indicates high levels of ambiguity, with each message being used to denote 11 concepts on average. Interestingly, recent findings suggest that ambiguity is a design feature of language that prevents the inefficient use of redundant codes, since some of the message content can be extracted from context: "the most efficient communication system will not convey information already provided by the context" BID28. In our case, we do no explicitly encode any bias towards ambiguity. We hypothesize that ambiguity arises due to the difficult exploration problem that agents are faced with, in combination with the fact that ambiguous protocols present a good local optimum that is over-represented in the hypothesis search space. As a , in the absence of environmental pressures (e.g., a high number of carefully constructed distractors) a suboptimal policy can still achieve a reasonably high accuracy (92%), making it even harder during training to escape from such a solution. In classic signaling games, this polysemy phenomenon manifests itself as different states receiving the same signal and is termed partial pooling equilibrium BID31. Perhaps rather counterintuitively, Skyrms (p.131) suggests that a way to obtain communication protocols that are robust to this type of local communication minima is to allow the invention of new signals, essentially increasing the search space of signals. Motivated by this suggestion, we play variants of the game in which we allow the agents to produce messages of greater maximum length (5 and 10), which leads to improved communicative success (98.2% and 98.5% respectively). We observe that the number of messages in the protocol increases from 31 to 293 and 355, respectively, reducing the average number of concepts a message can denote from 11 concepts to (approximately) 1 concept. In the real world, when speakers refer to cats, listeners would likely be in a situation where they had to discriminate a cat in the context of a couch or a dog, rather than in the context of a mirror or a cow.2 Simply put, objects in the world do not appear in random contexts, but rather there is regularity in the distribution of situational and visual co-occurrences. This property of the world is typically not captured in referential games studied in the language emergence literature, with distractors usually drawn from a uniform distribution. 
We address this issue and design an additional experiment with distractors sampled from a targetspecific context distribution reflecting normalized object co-occurrence statistics. Co-occurrence data is extracted from the MSCOCO caption dataset BID24. This leads to more plausible distractor sets with, for instance, the target goat more likely being mixed with sheep and cow as distractors rather than bike or eggplant. We find that the distractor selection process (uniform vs context-dependent) affects the language learning dynamics; see FIG1 for training curves for different experimental configurations. While the non-uniform distractor sampling of the context-dependent setting can be exploited to learn a degenerate strategy -giving up to 40% communicative success shortly after the start of trainingsubsequently learning under this scenario takes longer. This effect is likely a combination of the local minimum achieved by the degenerate strategy of picking a target at random from only the topically relevant set of distractors, which initially makes the problem easier; however, the fact that the co-occurrence statistics tend to align with the feature vectors, means that similar objects are more likely to appear as distractors and hence the overall game becomes more difficult. We now consider the question of how objects denoted by the same (ambiguous) message are related. When the context is drawn uniformly, object similarity is a predictor of object confusability, as similar objects tend to be mapped onto the same message (0.26 and 0.43 median pairwise cosine similarities of objects that received the same message as computed on the VisA space, for maximum message length 2 and 5, respectively). In the non-uniform case, we observe object confusability to be less influenced by object similarity (0.15 and 0.17 median pairwise cosine similarities of objects that received the same message, for maximum message length 2 and 5, respectively), but rather driven by the visual context co-occurrences. Simply put, in the non-uniform case confusability is less influenced by similarity since the agents must learn to distinguish between objects that naturally co-occur (e.g. sheep and goat). Thus, the choice of distractors, an experimental design decision that in existing language emergence literature has been neglected, has an effect on the organization (and Table 2 : Communicative success (acc in percentage) of agents evaluated on training (first row) and novel (last three rows) data. lexicon size column reports the percentage of novel messages (i.e., messages that were not used during the training).potentially the naturalness) of the emerged language, for example as reflected in the semantics of ambiguous or homonym words in the language. Quantifying the degree of compositionality and structure found in the emerged language is a challenging task; to the best of our knowledge, there is no formal mathematical definition of compositionality that would allow for a definitive quantitative measure. Thus, research on this topic usually relies on defining necessary requirements that any language claiming to be compositional should adhere to, such as the ability to generalize to novel situations BID2 BID13 BID21. We adopt a similar strategy by measuring the extent to which an emerged language is able to generalize to novel objects (Section 3.3.1). Moreover, we also report quantitative (Section 3.3.2) using a measure of message structure proposed in the language evolution literature BID6 BID10. 
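As a concrete illustration of the context-dependent distractor sampling described at the beginning of this section, the sketch below builds a normalized co-occurrence table from caption annotations and samples distractors for a given target from it; a uniform table recovers the standard setting. The caption format is an assumption for illustration, not the exact MSCOCO preprocessing used here.

```python
import numpy as np

def cooccurrence_table(captions, concepts):
    """captions: iterable of concept-name lists appearing together in one image/caption."""
    idx = {c: i for i, c in enumerate(concepts)}
    counts = np.zeros((len(concepts), len(concepts)))
    for caption in captions:
        present = [idx[c] for c in caption if c in idx]
        for i in present:
            for j in present:
                if i != j:
                    counts[i, j] += 1
    counts += 1e-6                       # smoothing so every pair remains possible
    np.fill_diagonal(counts, 0.0)        # a target is never its own distractor
    return counts / counts.sum(axis=1, keepdims=True)

def sample_distractors(target_id, table, k=4, rng=np.random.default_rng(0)):
    # draw k distinct distractors according to the target-specific context distribution
    return rng.choice(len(table), size=k, replace=False, p=table[target_id])
```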
We perform experiments where trained agents from Section 3.1 are exposed to different types of unseen objects, each of them differing to the degree to which the unseen objects resemble the objects found in the training data. In the test scenario, objects come from the same data distribution as the training data, but were not presented to the agents during training (e.g., a mouse); in the unigram chimeras scenario, the novel objects are constructed by sampling properties from a property-based distribution inferred from the training data, thus breaking any feature correlation (e.g., a mouselike animal with wheels); in the uniform chimeras scenario, the novel objects are constructed by uniformly sampling properties (e.g., a square red furry metallic object). Table 2 reports the communicative success. While there is a drop in performance for unseen objects, agents are performing above random chance (20%). The emerged language is indeed able to generalize to unseen objects; however, the degree of generalization is a function of the similarity between the training and unseen objects, thus ing in the uniform chimeras setting obtaining the lowest performance. Moreover, we observe examples of productivity, a key feature of compositionality. At test time, speakers are able to concoct novel messages on-the-fly (i.e., messages that are not part of their lexicon induced during training) to describe unseen objects. See the last three rows of Table 2, and the lexicon size column, for the percentage of novel messages. Even though listeners were not trained to associate novel messages with novel objects, they are still able comprehend such messages and correctly identify the target object. In the test data and length 10 cases, novel messages account for almost all of the generated messages, but with performance at 81.6%, providing evidence of the structure found in the messages. Given a set of objects, their meanings and the associated signals, BID6 define topographic similarity to be the correlation of the distances between all the possible pairs of meanings and the corresponding pairs of signals. compositional languages being higher than that of holistic. The intuition behind this measure is that semantically similar objects should have similar messages. To compute this measure, we first compute two lists of numbers: (i) the Levenshtein distances between all pairs of objects' messages; and (ii) the cosine similarity between all pairs of objects' VisA vectors. Given these two lists, the topographic similarity is defined as their negative Spearman ρ correlation (since we are correlating distances with similarities, negative values of correlation indicate topographic similarity of the two spaces). Intuitively, if similar objects share much of the message structure (e.g., common prefixes or suffixes), and dissimilar objects have little common structure in their respective messages, then the topographic similarity should be high, the highest possible value being 1.Results presented back in Table 1, in the topographic ρ column, show that topographic similarity is positive in all experimental setups, indicating that similar objects receive similar messages (p < 0.01, permutation test). A qualitative analysis of the messages generated in the length 10 and training data cases showed that, for example, 32% of the mammal objects had as a message prefix the bigram'95#10'; 36% of vehicle objects had'68#95'; and 11% of tool objects had'0#61', suggesting that these prefix bigrams encode category-specific information. 
Next, for each object pair, we calculate their Levenshtein message distance and respective cosine similarity, and plot in Figure 3 (right), for each distance, the average cosine similarities of the pairs with that distance (this is done for the length 10 and training data experiment). We observe that there is a clear relation between message similarity and meaning similarity (as measured by overlap in the VisA properties). In Figure 3, we also plot a similar correlation curve for an emerged language obtained by producing messages with randomly initialized and untrained speaker/listener architectures. This emerged language is at random in terms of communicative success; however, the generated messages do show signs of structure, since similar objects obtain somewhat similar messages. This seems to suggest that structured and disentangled pre-linguistic representations are, perhaps, a sufficient condition for the emergence of structured language, especially in neural network-based agents which, due to the nature of representation and information flow, favor similar inputs to trigger similar outputs. In this section, we present experiments in which agents receive as input entangled data in the form of raw pixel input, and have to learn to perform visual conceptual processing guided by the communication-based reward. We use a synthetic dataset of scenes consisting of geometric objects generated using the MuJoCo physics engine BID36. We generate RGB images of resolution 124 × 124 depicting single object scenes. For each object, we pick one of eight colors (blue, red, white, black, yellow, green, cyan, magenta) and five shapes (box, sphere, cylinder, capsule, ellipsoid) ing in 40 combinations, for each of which we generate 100 variations, varying the floor color and the object location in the image. Moreover, we introduce different variants of the game: game A with 19 distractors; game B with 1 distractor; game C with 1 distractor, and with speaker and listener having different viewpoints of the target object (the target object on the listener's side is in a different location); game D with 1 distractor, with speaker and listener having different viewpoints, and with balanced numbers of shapes and color (obtained by downsampling from 8 colors to 5 and removing any image containing objects of the 3 disregarded objects). For each game, we create train and test splits with proportions 75/25 (i.e., 3000/1000 for games A and B, and 1850/650 for games C and D).Pre-linguistic objects are presented in the form of pixel input, o ∈ 3×124×124. Speaker and listener convert the images o to dense representations u, each of them using an 8-layer convolutional neural network (ConvNet). Crucially, we do not pre-train the ConvNets on an object classification task; the only learning signal is the communication-based reward. Despite this fact, we observe that the lower layers of the ConvNets are encoding similar information to a ConvNet pre-trained on ImageNet BID11.3 Conceptually, we can think of the whole speaker/listener architecture as an encoder-decoder with a discrete bottleneck (the message). Given our initial positive findings, this reward-based learning signal induced from the communication game setup could be used for classagnostic large-scale ConvNet training. 
Moreover, we find that, even though no weights were shared, the agents' conceptual spaces get aligned at different levels, reminiscent of theories of interactive conceptual alignment during dialogue BID15 ) (see Appendix A for the related experiment). Unlike the experiments of Section 3, where agents start from disentangled representations, starting from raw perceptual input presents a greater challenge: the agents have to establish naming conventions about scenes, while at the same time learning to process the input with their own visual conceptual system. Since we do not pre-train their ConvNets on an object recognition task, the dense representations u used to derive the message contain no bias towards any image-or scene-specific information (e.g, object color, shape or location). The extraction of visual properties is thus driven entirely by the communication game. This contrasts with the cases of BID16 and BID22 who use pre-trained visual vectors, and qualitatively observe that the induced communication protocols encode information about objects. Table 3 presents the in terms of communicative train and test success (see Appendix C for additional experiments when having access to gold object attribute classifiers). Moreover, we also report the topographic similarity (column topographic ρ) between the symbolic attribute-based representations of scenes (floor color, object color, shape and location) and the generated messages. Overall, despite the challenges posed in this setup due to the raw nature of the data, performance across all games is well above chance, indicating that reinforcement learning agents trained end-toend are able to establish a communication protocol in this grounded environment. In game A, the agents reach 93.7% accuracy, with their lexicon consisting of 1068 messages, describing 3000 training objects. Most importantly, as captured by the positive topographic similarity, agents produce messages that respect (even to a limited degree) the compositional nature of scenes (i.e., objects as bags-of-attributes), indicating that similar scenes receive similar messages. Indeed, by examining their protocol (see Table 4), we find that messages encode in a structurally consistent way information about absolute location of objects, with the message prefix and suffix denoting the horizontal and vertical co-ordinate, respectively. Interestingly, this communication strategy is also typically followed by human players of referential games BID20. 3 Specifically, for all 4000 images we compute two sets of activations, one derived from the speaker's ConvNet and one from a pre-trained ResNet model BID17. We then compute all pairwise cosines in the speaker's ConvNet and ResNet space and correlate these values. We find Spearman ρ to be in the range 0.6-0.7 between the first 3 layers of the speaker's ConvNet and the ResNet. Table 3: Communicative success of agents playing different games. Columns random, train and test report percentage accuracies. Column topographic ρ reports the topographic similarity between the symbolic representation of scenes and the generated messages (p < 0.01, permutation test).However, we find the emerged protocols to be very unstable and too grounded in the specific game situation. Small modifications of the game setup, while having close to no negative impact on the communicative performance, can radically alter the form, semantics and interpretability of the communication protocol. In game B, performance remains at the same level (93.2%) as game A. 
However, we observe that the protocol consists of 13 unique messages which do not reflect the objects' attributes (as indicated by the close to zero topographic similarity), thus making the messages harder to interpret (see FIG3 for randomly sampled examples). When we change the viewpoint of the agents in game C, biasing them against communicating about absolute object location, the players derive a compact communication protocol consisting of 8 unique messages that describe primarily color. Finally, when color and shape are balanced, as in game D, we still observe a bias towards describing the color of objects, with the five induced messages providing a perfect clustering of the objects according to their colors. Table 4: Accuracy of probe linear classifiers of speaker's induced visual representations (all accuracies are in percentage format). In order to investigate what information gets captured by the speaker's ConvNet, we probe the inferred visual representations u used to derive the message. Specifically, we design 4 probe classifiers for the color and shape of the object; object position which is derived by discretizing each co-ordinate into 3 bins; and floor color which is obtained by clustering the RGB color representation of the floor. For each probe, we performed 5-fold cross validation with a linear classifier, and report accuracy in Table 4. Overall, different games in visual representations with different predictive power; object position is almost always encoded in the speaker's visual representation, even in situations where location of the object is not a good strategy for communication. On the other hand, object shape seems to provide less salient information, despite the fact that it is relevant for communication, at least in the C&D games. As expected, the structure and semantics of the emergent protocols are a function of the information captured in the visual representations. The degree to which the agents are able to pull apart the objects' factors of variation impacts their ability to communicate about those factors, with the most extreme case being game D, where the message ignores the shape entirely. Thus, disentanglement seems to be a necessary condition for communication, at least in the case of pixel input. We presented a series of studies investigating the properties of protocols emerging when reinforcement learning agents are trained end-to-end on referential communication games. We found that when agents are presented with disentangled input data in the form of attribute vectors, this inherent compositional structure is successfully retained in the output. Moreover, we showed that communication can also be achieved in cases where agents are presented with raw pixel data, a type of input that aligns better with the raw sensorimotor data that humans are exposed to. At the same time, we found that their ability to form compositional protocols in these cases is hampered by their ability to pull apart the objects' factors of variations. Altogether, we were able to successfully scale up traditional research from the language evolution literature on emergent communication tasks to the contemporary deep learning framework, thus opening avenues to more realistic, and large scale, computational simulations of language emergence with complex image stimuli. During conversation, communication allows interlocutors to achieve interactive conceptual alignment BID15. 
We are able to communicate because we have established a common ground and our representations at different levels become aligned (e.g., participants mutually understand that "he" in the conversation refers to Bob). We investigated whether the agents' conceptual systems achieve a similar structural alignment. We measure the alignment as the Spearman ρ correlation between the two agents' pairwise object cosine similarities, where objects are represented by activations from the agents' ConvNet layers (this measure is sketched in code below). Interestingly, we observe a gradual increase in the structural similarity as we represent the objects with layer activations closer to the pixel space. Conceptual spaces are more aligned the closer they are to the raw pixel input (ρ = 0.97-0.91, depending on the game) and become more dissimilar as the representations become more abstract. We can draw an analogy to language processing: the first ConvNet layers perform some low-level processing analogous to phoneme recognition or word segmentation (and are thus more objective), while higher layers perform more abstract processing, vaguely analogous to semantics and pragmatics (and thus represent more subjective knowledge). In cases of successful communication, speakers' and listeners' conceptual spaces closer to the communication point are structurally very similar (ρ = 0.85-0.62, depending on the game); however, this similarity drops dramatically in cases of failure of communication (ρ = 0.15). All LSTM hidden states of the "speaking" and "listening" modules, as well as the "seeing" pre-linguistic feed-forward encoders (see Section 3), have dimension 50. The "seeing" pre-linguistic ConvNet encoders (see Section 4) have 8 layers, 32 filters with kernel size 3 for every layer, and strides for each layer. We use ReLU as the activation function as well as batch normalization for every layer. For learning, we used the RMSprop optimizer, with learning rate 0.0001. We use a separate value of entropy regularization for each policy: for π S we use 0.01 and for π L we use 0.001. We use a mini-batch of 32. We assume a model which has access to perfect attribute classifiers for color, shape and object position, for the latter using a classifier operating on the discretized annotations we obtained in Section 4.2 after quantizing the real-valued object location. For computing the performance of this model using gold attribute classifiers, we first remove from the distractors any candidate not matching the target's attributes and then pick one at random. We repeat this experiment for single attribute classifiers and their pairwise combinations. TAB4: Communicative success of trained models from Section 4.1 (train and test) as well as models with access to gold classifiers. All accuracies are in percentage format.
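A minimal sketch of the structural-alignment measure referenced above, assuming per-layer activation matrices have already been collected for the same set of objects from both agents; the function names are illustrative.

```python
import numpy as np
from scipy.stats import spearmanr

def pairwise_cosines(acts):
    """acts: (n_objects, dim) activations for one agent at one layer.
    Returns the upper-triangular vector of pairwise cosine similarities."""
    normed = acts / (np.linalg.norm(acts, axis=1, keepdims=True) + 1e-12)
    sims = normed @ normed.T
    iu = np.triu_indices(len(acts), k=1)
    return sims[iu]

def structural_alignment(speaker_acts, listener_acts):
    """Spearman rho between the two agents' pairwise object-similarity structures."""
    rho, _ = spearmanr(pairwise_cosines(speaker_acts), pairwise_cosines(listener_acts))
    return rho
```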
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HJGv1Z-AW
A controlled study of the role of environments with respect to properties in emergent communication protocols.
For understanding generic documents, information like font sizes, column layout, and generally the positioning of words may carry semantic information that is crucial for solving a downstream document intelligence task. Our novel BERTgrid, which is based on Chargrid by , represents a document as a grid of contextualized word piece embedding vectors, thereby making its spatial structure and semantics accessible to the processing neural network. The contextualized embedding vectors are retrieved from a BERT language model. We use BERTgrid in combination with a fully convolutional network on a semantic instance segmentation task for extracting fields from invoices. We demonstrate its performance on tabulated line item and document header field extraction. Documents often come in a variety of layouts and formats. For instance, a single document may contain isolated text boxes, tabular arrangements, multiple columns, and different font sizes. This layout can carry crucial semantic information. In classical natural language processing (NLP), however, the layout information is completely discarded as the document text is simply a sequence of words. Without access to the layout, a downstream task such as extraction of tabulated data can become much harder -and in some cases impossible to solve -since the necessary serialization may lead to severe information loss. Instead of working on the textual level, it is possible to directly apply methods from computer vision (CV) (e.g.) to work on the raw document pixel level which naturally retains the two-dimensional (2D) document structure. However, this is impractical, as a machine learning model would first need to learn textual information from the raw pixel data followed by the semantics. Recent approaches have designed a hybrid between NLP and CV methods for document intelligence: Chargrid , followed more recently by CUTIE , construct a 2D grid of characters or words from a document and feed it into a neural model, thereby preserving the spatial arrangement of the document. The symbols in the original document are embedded in some vector space, yielding a rank-3 tensor (width, height, embedding). Both papers report significant benefits of using such a grid approach over purely sequential 1D input representations, especially for semantically understanding tabulated or otherwise spatially arranged text like line items. With our contribution BERTgrid, we incorporate contextualized embedding into the grid document representation. More specifically, we use a BERT language model pre-trained on a large pool of unlabeled documents from the target domain to compute contextualized feature vectors for every word piece in a document. We demonstrate the effectiveness of BERTgrid on an invoice information extraction task from document tables and headers. We compare our to Chargrid and find significant improvements from 61.76% ± 0.72 to 65.48% ± 0.58 on an invoice dataset previously described in. Instead of constructing a grid on the character level and embedding each character with one-hot encoding as in , we construct a grid on the word-piece level and embed with dense contextualized vectors from a BERT language model. Formally, let a document be denoted by max | j ∈ {1, . . ., n}, consisting of n word pieces w (j), each of which is associated with a non-overlapping bounding... w (n) be the line-by-line serialized version of D. Using all word pieces j ∈ {1, . . 
., n}, the BERTgrid representation of the document is defined as where d is the embedding dimensionality, e is the embedding function, and 0 d denotes an all-zero vector which we use for . We implement e using a pre-trained BERT language model. During evaluation of e, S is fed into the BERT model. The representation of the second-to-last hidden layer for the jth position is used to embed w (j) at position x, y in W. Fig. 2 (d) visualizes a BERTgrid tensor W. Our model pipeline is summarized in Fig. 1. A raw document image is first passed through an OCR engine to retrieve the words and their positions, i.e. D. We then serialize D ing in S which is subsequently passed through a BERT language model (BERT BASE configuration from) to get a contextualized vector for each word. Together with positional information from D, we construct W according to Eq. 1. For our downstream information extraction task, we use the same fully convolutional encoder-decoder neural network architecture and the same semantic segmentation and bounding box regression training tasks as , except W is the input to the neural network. Just like in , we obtain extracted document strings by comparing the predicted segmentation mask and bounding boxes with D. We interchangeably use BERTgrid for denoting just the document representation or the complete model consisting of input and network. As an extension to BERTgrid, we also construct a second model [C+BERTgrid] which combines the Chargrid and BERTgrid input representations. For that, we replicate the first convolutional block of the neural network to have a Chargrid and a BERTgrid branch. Both are subsequently merged by adding the two hidden representations. All models are trained for 800k iterations on a single Nvidia V100 GPU each. The BERT model with sequence length 512 is pre-trained for 2M steps and not fine-tuned on the downstream task. 3 Experiments As a concrete example for document intelligence, we extract key-value information from invoices without making assumptions on the invoice layout. We distinguish two kinds of fields: header fields and line item fields. The former includes invoice amount, invoice number, invoice date, and vendor name and address. The latter includes line item quantity, description, VAT amount/rate, and total price. It is usually contained in tabulated form and can occur in multiple instances per invoice. For each line item, all associated fields are grouped by a single bounding box per line item. Not all fields are always present on an invoice. We use the same dataset for training and testing as described in. It is comprised of 12k samples which we split 10k/1k/1k for training/validation/testing. The invoices are from a large variety of different vendors and the sets of vendors contained in training, validation, and testing samples are disjoint. Languages are mixed, with the majority being English. An example invoice along with its ground truth annotations is shown in Fig. 2. In addition, we use a second, much larger dataset comprised of 700k unlabeled invoices. This dataset is serialized ing in about 800 MB of plain text data. We use it for pre-training BERT from scratch as well as learning embedding vectors with word2vec. We use the evaluation metric from. This measure is similar to the edit distance, a measure for the dissimilarity of two strings. For a given field, we count the number of insertions, deletions, and modifications of the predicted instances to match the ground truth (pooled across the entire test set). 
The measure is computed as 1 − (#insertions + #deletions + #modifications) / N, where N is the total number of instances of a given field occurring in the ground truth of the entire test set. This measure can be negative, meaning that it would be less work to perform the extraction manually. The best value it can reach is 1, corresponding to perfect extraction. Tab. 1 shows the results in terms of the evaluation measure for different input representations. All results are averaged over four randomly initialized training runs. Previous work has shown that grid-based approaches like [Chargrid] or [Wordgrid] outperform conventional sequential models as well as purely image-based methods, so we use [Chargrid] as our baseline, with 61.76% ± 0.72. We assume the performance of BERTgrid stems from (i) embedding on the word-piece level and (ii) contextualization. Rather than learning to represent words first, the network directly gets access to semantically meaningful word(-piece)-level information. For instance, words such as avenue, street, and drive are very different when embedded on the character level, but will be mapped to approximately the same embedding vector. We observe that both [C+Wordgrid] and [C+BERTgrid] converge faster than [Chargrid], which supports this statement. During language model pre-training on the large, unlabeled dataset, knowledge about the language of invoices is distilled into the BERT model parameters. Compared to simpler, non-contextualized embedding methods such as word2vec, it has sufficient capacity to capture complex dependencies. This distilled knowledge is made accessible via the BERTgrid representation and eases the downstream task significantly. We acknowledge that the BERT model only has access to S, not D. Future work could use 2D positional encodings to preserve the layout structure also during language model pre-training and inference.
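Referring back to the BERTgrid definition in Section 2 (Eq. 1), the following is a minimal sketch of how the grid tensor W might be assembled once word pieces, their bounding boxes (already mapped to grid coordinates), and their contextualized BERT vectors are available. All names and the coordinate convention are assumptions for illustration, not the authors' code.

```python
import numpy as np

def make_bertgrid(boxes, embeddings, height, width):
    """boxes: n bounding boxes (x0, y0, x1, y1) in grid coordinates, one per word piece
    embeddings: (n, d) contextualized vectors, e.g. BERT's second-to-last layer for each piece
    Returns W of shape (height, width, d); cells outside every box stay all-zero (background)."""
    d = embeddings.shape[1]
    W = np.zeros((height, width, d), dtype=np.float32)
    for (x0, y0, x1, y1), vec in zip(boxes, embeddings):
        W[y0:y1, x0:x1, :] = vec  # every cell inside the j-th box receives e(w^(j))
    return W
```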
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1gsGaq9US
Grid-based document representation with contextualized embedding vectors for documents with 2D layouts
Deep reinforcement learning (RL) policies are known to be vulnerable to adversarial perturbations to their observations, similar to adversarial examples for classifiers. However, an attacker is not usually able to directly modify another agent's observations. This might lead one to wonder: is it possible to attack an RL agent simply by choosing an adversarial policy acting in a multi-agent environment so as to create natural observations that are adversarial? We demonstrate the existence of adversarial policies in zero-sum games between simulated humanoid robots with proprioceptive observations, against state-of-the-art victims trained via self-play to be robust to opponents. The adversarial policies reliably win against the victims but generate seemingly random and uncoordinated behavior. We find that these policies are more successful in high-dimensional environments, and induce substantially different activations in the victim policy network than when the victim plays against a normal opponent. Videos are available at https://attackingrl.github.io. The discovery of adversarial examples for image classifiers prompted a new field of research into adversarial attacks and defenses . Recent work has shown that deep RL policies are also vulnerable to adversarial perturbations of image observations ). However, real-world RL agents inhabit natural environments populated by other agents, including humans, who can only modify observations through their actions. We explore whether it's possible to attack a victim policy by building an adversarial policy that takes actions in a shared environment, inducing natural observations which have adversarial effects on the victim. RL has been applied in settings as varied as autonomous driving , negotiation and automated trading . In domains such as these, an attacker cannot usually directly modify the victim policy's input. For example, in autonomous driving pedestrians and other drivers can take actions in the world that affect the camera image, but only in a physically realistic fashion. They cannot add noise to arbitrary pixels, or make a building disappear. Similarly, in financial trading an attacker can send orders to an exchange which will appear in the victim's market data feed, but the attacker cannot modify observations of a third party's orders. Figure 1: Illustrative snapshots of a victim (in blue) against normal and adversarial opponents (in red). The victim wins if it crosses the finish line; otherwise, the opponent wins. Despite never standing up, the adversarial opponent wins 86% of episodes, far above the normal opponent's 47% win rate. environments are substantially more vulnerable to adversarial policies than in lower-dimensional Ant environments. To gain insight into why adversarial policies succeed, we analyze the activations of the victim's policy network using a Gaussian Mixture Model and t-SNE . We find adversarial policies induce significantly different activations than normal opponents, and that the adversarial activations are typically more widely dispersed across time steps than normal activations. A natural defence is to fine-tune the victim against the adversary. We find this protects against that particular adversary, but that repeating the attack method finds a new adversary the fine-tuned victim is vulnerable to. However, the new adversary is qualitatively different, physically interfering with the victim. This suggests repeated fine-tuning might provide protection against a range of adversaries. 
Our paper makes three contributions. First, we propose a novel, physically realistic threat model for adversarial examples in RL. Second, we demonstrate the existence of adversarial policies in this threat model, in several simulated robotics games. Our adversarial policies reliably beat the victim, despite training with less than 3% as many timesteps and generating seemingly random behavior. Third, we conduct a detailed analysis of why the adversarial policies work. We show they create natural observations that are adversarial to the victim and push the activations of the victim's policy network off-distribution. Additionally, we find policies are easier to attack in high-dimensional environments. As deep RL is increasingly deployed in environments with potential adversaries, we believe it is important that practitioners are aware of this previously unrecognized threat model. Moreover, even in benign settings, we believe adversarial policies can be a useful tool for uncovering unexpected policy failure modes. Finally, we are excited by the potential of adversarial training using adversarial policies, which could improve robustness relative to conventional self-play by training against adversaries that exploit weaknesses undiscovered by the distribution of similar opponents present during self-play. Most study of adversarial examples has focused on small p norm perturbations to images, which discovered cause a variety of models to confidently mispredict the class, even though the changes are visually imperceptible to a human. Gilmer et al. (2018a) argued that attackers are not limited to small perturbations, and can instead construct new images or search for naturally misclassified images. argue that the near-ubiquitous p model is merely a convenient local approximation for the true worst-case risk. We follow in viewing adversarial examples more broadly, as "inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake." The little prior work studying adversarial examples in RL has assumed an p -norm threat model. and showed that deep RL policies are vulnerable to small perturbations in image observations. Recent work by generates a sequence of perturbations guiding the victim to a target state. Our work differs from these previous approaches by using a physically realistic threat model that disallows direct modification of the victim's observations. showed agents may become tightly coupled to the agents they were trained with. Like adversarial policies, this in seemingly strong polices failing against new opponents. However, the victims we attack win against a range of opponents, and so are not coupled in this way. Adversarial training is a common defense to adversarial examples, achieving state-of-the-art robustness in image classification . Prior work has also applied adversarial training to improve the robustness of deep RL policies, where the adversary exerts a force vector on the victim or varies dynamics parameters such as friction (; ;). Our defence of fine-tuning the victim against the adversary is inspired by this work. This work follows a rich tradition of worst-case analysis in RL. In robust MDPs, the transition function is chosen adversarially from an uncertainty set . solve the converse problem: finding the set of transition functions for which a policy is optimal. Methods also exist to verify controllers or find a counterexample to a specification. 
verify decision trees distilled from RL policies, while test black-box closedloop simulations. can even synthesise controllers robust to adversarial disturbances. Unfortunately, these techniques are only practical in simple environments with lowdimensional adversarial disturbances. By contrast, while our method lacks formal guarantees, it can test policies in complex multi-agent tasks and naturally scales with improvements in RL algorithms. We model the victim as playing against an opponent in a two-player Markov game . Our threat model assumes the attacker can control the opponent, in which case we call the opponent an adversary. We denote the adversary and victim by subscript α and ν respectively. The game M = (S, (A α, A ν), T, (R α, R ν)) consists of state set S, action sets A α and A ν, and a joint state transition function T: S × A α × A ν → ∆ (S) where ∆ (S) is a probability distribution on S. The reward function R i: S × A α × A ν × S → R for player i ∈ {α, ν} depends on the current state, next state and both player's actions. Each player wishes to maximize their (discounted) sum of rewards. The adversary is allowed unlimited black-box access to actions sampled from π v, but is not given any white-box information such as weights or activations. We further assume the victim agent follows a fixed stochastic policy π v, corresponding to the common case of a pre-trained model deployed with static weights. Note that in safety critical systems, where attacks like these would be most concerning, it is standard practice to validate a model and then freeze it, so as to ensure that the deployed model does not develop any new issues due to retraining. Therefore, a fixed victim is a realistic reflection of what we might see with RL-trained policies in real-world settings, such as with autonomous vehicles. Since the victim policy π ν is held fixed, the two-player Markov game M reduces to a single-player MDP M α = (S, A α, T α, R α) that the attacker must solve. The state and action space of the adversary are the same as in M, while the transition and reward function have the victim policy π ν embedded: where the victim's action is sampled from the stochastic policy a ν ∼ π ν (· | s). The goal of the attacker is to find an adversarial policy π α maximizing the sum of discounted rewards: Note the MDP's dynamics T α will be unknown even if the Markov game's dynamics T are known since the victim policy π ν is a black-box. Consequently, the attacker must solve an RL problem. We demonstrate the existence of adversarial policies in zero-sum simulated robotics games. First, we describe how the victim policies were trained and the environments they operate in. Subsequently, we provide details of our attack method in these environments, and describe several baselines. Finally, we present a quantitative and qualitative evaluation of the adversarial policies and baseline opponents. We attack victim policies for the zero-sum simulated robotics games created by Bansal et al. (2018a), illustrated in Figure 2. The victims were trained in pairs via self-play against random old versions of their opponent, for between 680 and 1360 million time steps. We use the pre-trained policy weights released in the "agent zoo" of Bansal et al. (2018b). In symmetric environments, the zoo agents are labeled ZooN where N is a random seed. In asymmetric environments, they are labeled ZooVN and ZooON representing the Victim and Opponent agents. All environments are two-player games in the MuJoCo robotics simulator. 
Both agents observe the position, velocity and contact forces of joints in their body, and the position of their opponent's joints. The episodes end when a win condition is triggered, or after a time limit, in which case the agents draw. We evaluate in all environments from Bansal et al. (2018a) except for Run to Goal, which we omit as the setup is identical to You Shall Not Pass except for the win condition. We describe the environments below, and specify the number of zoo agents and their type (MLP or LSTM): Sumo Ants (4, LSTM). The same task as Sumo Humans, but with'Ant' quadrupedal robot bodies. We use this task in Section 5.2 to investigate the importance of dimensionality to this attack method. Following the RL formulation in Section 3, we train an adversarial policy to maximize Equation 1 using Proximal Policy Optimization (PPO) . We give a sparse reward at the end of the episode, positive when the adversary wins the game and negative when it loses or ties. Bansal et al. (2018a) trained the victim policies using a similar reward, with an additional dense component at the start of training. We train for 20 million time steps using Stable Baselines's PPO implementation . The hyperparameters were selected through a combination of manual tuning and a random search of 100 samples; see Section A in the appendix for details. We compare our methods to three baselines: a policy Rand taking random actions; a lifeless policy Zero that exerts zero control; and all pre-trained policies Zoo * from Bansal et al. (2018a). We find the adversarial policies reliably win against most victim policies, and outperform the pre-trained Zoo baseline for a majority of environments and victims. We report Key: The solid line shows the median win rate for Adv across 5 random seeds, with the shaded region representing the minimum and maximum. The win rate is smoothed with a rolling average over 100, 000 timesteps. Baselines are shown as horizontal dashed lines. Agents Rand and Zero take random and zero actions respectively. The Zoo baseline is whichever ZooM (Sumo) or ZooOM (other environments) agent achieves the highest win rate. The victim is ZooN (Sumo) or ZooVN (other environments), where N is given in the title above each figure. the win rate over time against the median victim in each environment in Figure 3, with full in Figure 6 in the supplementary material. Win rates against all victims are summarized in Figure 4. Qualitative Evaluation The adversarial policies beat the victim not by performing the intended task (e.g. blocking a goal), but rather by exploiting weaknesses in the victim's policy. This effect is best seen by watching the videos at https://attackingrl.github.io/. In Kick and Defend and You Shall Not Pass, the adversarial policy never stands up. The adversary instead wins by taking actions that induce adversarial observations causing the victim's policy to take poor actions. A robust victim could easily win, a we demonstrate in Section 5.1. This flavor of attacks is impossible in Sumo Humans, since the adversarial policy immediately loses if it falls over. Faced with this control constraint, the adversarial policy learns a more high-level strategy: it kneels in the center in a stable position. Surprisingly, this is very effective against victim 1, which in 88% of cases falls over attempting to tackle the adversary. However, it proves less effective against victims 2 and 3, achieving only a 62% and 45% win rate, below Zoo baselines. 
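As a concrete illustration of the setup in Sections 3 and 4.2, here is a minimal sketch of how the two-player game can be exposed to the attacker as a single-agent environment with the frozen victim policy embedded in the dynamics. The environment interface and class names are illustrative assumptions, not the authors' implementation; in practice the adversary would then be trained on this wrapper with PPO and a sparse win/loss reward.

```python
class EmbeddedVictimEnv:
    """Single-agent view of a two-player Markov game with a frozen victim policy
    folded into the transition dynamics, as in the attacker's MDP of Section 3."""

    def __init__(self, two_player_env, victim_policy):
        self.env = two_player_env      # assumed to step with an (adversary, victim) action pair
        self.victim = victim_policy    # black-box: only sampled actions are observed

    def reset(self):
        obs_adv, self._obs_victim = self.env.reset()
        return obs_adv

    def step(self, adversary_action):
        victim_action = self.victim.sample(self._obs_victim)   # a_v ~ pi_v(. | s)
        (obs_adv, self._obs_victim), (r_adv, _r_victim), done, info = self.env.step(
            (adversary_action, victim_action))
        return obs_adv, r_adv, done, info   # the adversary maximizes only its own reward
```

Because the victim is queried only for sampled actions, the wrapper respects the black-box assumption of the threat model.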
We further explore the importance of the number of dimensions the adversary can safely manipulate in Section 5.2. Distribution Shift One might wonder if the adversarial policies are winning simply because they are outside the training distribution of the victim. To test this, we evaluate victims against two simple off-distribution baselines: a random policy Rand (green) and a lifeless policy Zero (red). These baselines win as often as 30% to 50% in Kick and Defend, but less than 1% of the time in Sumo and You Shall Not Pass. This is well below the performance of our adversarial policies. We conclude that most victim policies are robust to typical off-distribution observations. Although our adversarial policies do produce off-distribution observations, this is insufficient to explain their performance. In the previous section we demonstrated adversarial policies exist for victims in a range of competitive simulated robotics environments. In this section, we focus on understanding why these policies exist. Specifically, we establish that adversarial policies manipulate the victim through their body position; that victims are more vulnerable to adversarial policies in high-dimensional environments; and that activations of the victim's policy network differ substantially when playing an adversarial opponent. Adv is the best adversary trained against the victim, and Rand is a policy taking random actions. Zoo * N corresponds to ZooN (Sumo) or ZooON (otherwise). Zoo * 1T and Zoo * 1V are the train and validation datasets, drawn from Zoo1 (Sumo) or ZooO1 (otherwise). We have previously shown that adversarial policies are able to reliably win against victims. In this section, we demonstrate that they win by taking actions to induce natural observations that are adversarial to the victim, and not by physically interfering with the victim. To test this, we introduce a'masked' victim (labeled ZooMN or ZooMVN) that is the same as the normal victim ZooN or ZooVN, except the observation of the adversary's position is set to a static value corresponding to a typical initial position. We use the same adversarial policy against the normal and masked victim. One would expect it to be beneficial to be able to see your opponent. Indeed, the masked victims do worse than a normal victim when playing normal opponents. For example, Figure 4b shows that in You Shall Not Pass the normal opponent ZooO1 wins 78% of the time against the masked victim ZooMV1 but only 47% of the time against the normal victim ZooV1. However, the relationship is reversed when playing an adversary. The normal victim ZooV1 loses 86% of the time to adversary Adv1 whereas the masked victim ZooMV1 wins 99% of the time. This pattern is particularly clear in You Shall Not Pass, but the trend is similar in other environments. This is surprising as it implies highly non-transitive relationships may exist between policies even in games that seem to be transitive. A game is said to be transitive if policies can be ranked such that higher-ranked policies beat lower-ranked policies. Prima facie, the games in this paper seem transitive: professional human soccer players and sumo wrestlers can reliably beat amateurs. Despite this, there is a non-transitive relationship between adversarial policies, victims and masked victims. Consequently, we urge caution when using methods such as self-play that assume transitivity, and would recommend more general methods where practical . Our findings also suggest a trade-off in the size of the observation space. 
In benign environments, allowing more observation of the environment increases performance. However, this also makes the agent more vulnerable to adversaries. This is in contrast to an idealized Bayesian agent, where the value of information is always non-negative . In the following section, we investigate further the connection between vulnerability to attack and the size of the observation space. It is known that classifiers are more vulnerable to adversarial examples on high-dimensional inputs (b; ;). We hypothesize a similar for RL policies: the greater the dimensionality of the component P of the observation space under control of the adversary, the more vulnerable the victim is to attack. We test this hypothesis in the Sumo environment, varying whether the agents are Ants or Humanoids. The in Figures 4c and 4d support the hypothesis. The adversary has a much lower win-rate in the low-dimensional Sumo Ants (dim P = 15) environment than in the higher dimensional Sumo Humans (dim P = 24) environment, where P is the position of the adversary's joints. In Section 5.1 we showed that adversarial policies win by creating natural observations that are adversarial to the victim. In this section, we seek to better understand why these observations are adversarial. We record activations from each victim's policy network playing a range of opponents, and analyse these using a Gaussian Mixture Model (GMM) and a t-SNE representation. See Section B in the supplementary material for details of training and hyperparameters. We fit a GMM on activations Zoo * 1T collected playing against a normal opponent, Zoo1 or ZooV1, holding out Zoo * 1V for validation. Figure 5a shows that the adversarial policy Adv induces activations with the lowest log-likelihood, with random baseline Rand only slightly more probable. Normal opponents Zoo * 2 and Zoo * 3 induce activations with almost as high likelihood as the validation set Zoo * 1V, except in Sumo Humans where they are as unlikely as Rand. We plot a t-SNE visualization of the activations of Kick and Defend victim ZooV2 in Figure 5b. As expected from the density model , there is a clear separation between between Adv, Rand and the normal opponent ZooO2. Intriguingly, Adv induces activations more widely dispersed than the random policy Rand, which in turn are more widely dispersed than ZooO2. We report on the full set of victim policies in Figures 8 and 9 in the supplementary material. The ease with which policies can be attacked highlights the need for effective defences. A natural defence is to fine-tune the victim zoo policy against an adversary, which we term single training. We also investigate dual training, randomly picking either an adversary or a zoo policy at the start of each episode. The training procedure is otherwise the same as for adversaries, described in Section 4.2. We report on the win rates in You Shall Not Pass in Figure 4b. We find both the single ZooSV1 and dual ZooDV1 fine-tuned victims are robust to adversary Adv1, with the adversary win rate dropping from 87% to around 10%. However, ZooSV1 catastrophically forgots how to play against the normal opponent ZooO1. The dual fine-tuned victim ZooDV1 fares better, but still only wins 57% of the time against ZooO1, compared to 48% before fine-tuning. This suggests ZooV1 may use features that are helpful against a normal opponent but which are easily manipulable . Although the fine-tuned victims are robust to the original adversarial policy Adv1, they are still vulnerable to our attack method. 
New adversaries AdvS1 and AdvD1 trained against ZooSV1 and ZooDV1 win at equal or greater rates than before, and transfer successfully to the original victim. However, the new adversaries AdvS1 and AdvD1 are qualitatively different, tripping the victim up by lying prone on the ground, whereas Adv1 causes ZooV1 to fall without ever touching it. Contributions. Our paper makes three key contributions. First, we have proposed a novel threat model of natural adversarial observations produced by an adversarial policy taking actions in a shared environment. Second, we demonstrate that adversarial policies exist in a range of zero-sum simulated robotics games against state-of-the-art victims trained via self-play to be robust to adversaries. Third, we verify the adversarial policies win by confusing the victim, not by learning a generally strong policy. Specifically, we find the adversary induces highly off-distribution activations in the victim, and that victim performance increases when it is blind to the adversary's position. While it may at first appear unsurprising that a policy trained as an adversary against another RL policy would be able to exploit it, we believe that this observation is highly significant. The policies we have attacked were explicitly trained via self-play to be robust. Although it is known that self-play with deep RL may not converge, or converge only to a local rather than global Nash, self-play has been used with great success in a number of works focused on playing adversarial games directly against humans ). Our work shows that even apparently strong self-play policies can harbor serious but hard to find failure modes, demonstrating these theoretical limitations are practically relevant and highlighting the need for careful testing. Our attack provides some amount of testing by constructively lower-bounding the exploitability of a victim policy -its performance against its worst-case opponent -by training an adversary. Since the victim's win rate declines against our adversarial policy, we can confirm that the victim and its self-play opponent were not in a global Nash. Notably we expect our attack to succeed even for policies in a local Nash, as the adversary is trained starting from a random point that is likely outside the victim's attractive basin. Defence. We implemented a simple defence: fine-tuning the victim against the adversary. We find our attack can be successfully reapplied to beat this defence, suggesting adversarial policies are difficult to eliminate. However, the defence does appear to protect against attacks that rely on confusing the victim: the new adversarial policy is forced to instead trip the victim up. We therefore believe that scaling up this defence is a promising direction for future work. In particular, we envisage a variant of population-based training where new agents are continually added to the pool to promote diversity, and agents train against a fixed opponent for a prolonged period of time to avoid local equilibria. Table 1 gives the hyperparameters used for training. The number of environments was chosen for performance reasons after observing diminishing returns from using more than 8 parallel environments. The batch size, mini-batches, epochs per update, entropy coefficient and learning rate were tuned via a random search with 100 samples on two environments, Kick and Defend and Sumo Humans. The total time steps was chosen by inspection after observing diminishing returns to additional training. 
All other hyperparameters are the defaults in the PPO2 implementation in Stable Baselines . We repeated the hyperparameter sweep for fine-tuning victim policies for the defence experiments, but obtained similar results. For simplicity, we therefore chose to use the same hyperparameters throughout. We used a mixture of in-house and cloud infrastructure to perform these experiments. It takes around 8 hours to train an adversary for a single victim using 4 cores of an Intel Xeon Platinum 8000 (Skylake) processor. We collect activations from all feed-forward layers of the victim's policy network. This gives two 64-length vectors, which we concatenate into a single 128-dimension vector for analysis with a Gaussian Mixture Model and a t-SNE representation. We fit t-SNE models with perplexity 5, 10, 20, 50, 75, 100, 250 and 1000. We chose 250 since qualitatively it produced the clearest visualization of the data with a moderate number of distinct clusters. We fit GMMs with 5, 10, 20, 40 and 80 components, with a full (unrestricted) and a diagonal covariance matrix. We used the Bayesian Information Criterion (BIC) and average log-likelihood on a held-out validation set as criteria for selecting hyperparameters. We found 20 components with a full covariance matrix achieved the lowest BIC and highest validation log-likelihood in the majority of environment-victim pairs, and was the runner-up in the remainder (this density analysis is sketched in code below). Supplementary figures are provided on the subsequent pages. Figure 9: t-SNE activations of victim Zoo1 (Sumo) or ZooV1 (other environments). The results are the same as in Figure 8 but decomposed into individual opponents for clarity. Model fitted with a perplexity of 250 to activations from 5000 timesteps against each opponent. Opponent Adv is the best adversary trained against the victim. Opponent Zoo is Zoo1 (Sumo) or ZooO1 (other environments). See Figure 8 for results for other victims (one plot per victim).
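A minimal sketch of the density analysis referenced above (and used in Section 5.3): a Gaussian Mixture Model is fit on victim activations collected against a normal opponent, and activations induced by other opponents are scored under it. The 20-component, full-covariance configuration follows the hyperparameter selection reported above; everything else (names, data handling) is illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def activation_likelihoods(train_acts, heldout_by_opponent, n_components=20):
    """train_acts: (n, 128) victim activations collected against a normal opponent.
    heldout_by_opponent: dict mapping opponent name -> (m, 128) activation matrix.
    Returns the mean per-timestep log-likelihood of each opponent's activations
    under the density model fit on the normal-opponent activations."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full").fit(train_acts)
    return {name: float(np.mean(gmm.score_samples(acts)))
            for name, acts in heldout_by_opponent.items()}
```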
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HJgEMpVFwB
Deep RL policies can be attacked by other agents taking actions so as to create natural observations that are adversarial.
GloVe and Skip-gram word embedding methods learn word vectors by decomposing a denoised matrix of word co-occurrences into a product of low-rank matrices. In this work, we propose an iterative algorithm for computing word vectors based on modeling word co-occurrence matrices with Generalized Low Rank Models. Our algorithm generalizes both Skip-gram and GloVe as well as giving rise to other embedding methods based on the specified co-occurrence matrix, distribution of co-occurences, and the number of iterations in the iterative algorithm. For example, using a Tweedie distribution with one iteration results in GloVe, and using a Multinomial distribution run to full convergence results in Skip-gram. Experimental results demonstrate that multiple iterations of our algorithm improve over the GloVe method on the Google word analogy similarity task. Word embeddings are low dimensional vector representations of words or phrases. They are applied to word analogy tasks and used as feature vectors in numerous tasks within natural language processing, computational linguistics, and machine learning. They are constructed by various methods which rely on the distributional hypothesis popularized by Firth: "words are characterized by the company they keep" BID9. Two seminal methodological approaches to finding word embeddings are Skip-gram [a] and GloVe []. Both methods input a corpus D, process it into a word co-occurence matrix X, then output word vectors with some dimension d. Skip-gram processes a corpus with w words into a count co-occurence matrix X ∈ R w×w, where x ij is the number of times word w i appears in the same context as the word w j. Here, two words being in the same context means that they're within l c tokens of each other. Define this co-occurence matrix to be the count co-occurence matrix. Next, Skip-gram finds (Û, V̂) = arg min_{U,V} − Σ_{i,j=1}^{w,w} x_ij log[ exp(u_i^T v_j) / Σ_{k=1}^{w} exp(u_i^T v_k) ], (1) where u_i^T is the i-th row of U, then defines the word vectors to be the rows of Û. GloVe processes a corpus with w words into a harmonic co-occurence matrix X ∈ R w×w, where x ij is the harmonic sum of the number of tokens between words w i and w j over each co-occurrence. That is, x ij = Σ_{p1 < p2, |p1 − p2| ≤ l_c, D(p1) = w_i, D(p2) = w_j} 1/(p2 − p1). GloVe then finds (Û, V̂, â, b̂) = arg min_{U,V,a,b} Σ_{i,j : x_ij > 0} h(x_ij) (u_i^T v_j + a_i + b_j − log x_ij)^2, (2) where a_i and b_j are bias terms, h(x_ij) = (min{x_ij, x_max})^{0.75} is the weight, and x_max is some prespecified cutoff. GloVe then defines the estimated word vectors to be the rows of (1/2)Û + (1/2)V̂. In both Skip-gram and GloVe, a matrix of co-occurences X is introduced by processing the corpus, and an objective function is introduced to find a low rank factorization related to the co-occurences X. In this paper, we derive the objective functions from a model-based perspective. We introduce an iterative algorithm, and show that problem (1) results from running the iterative algorithm in full-convergence mode for a Multinomial model and that problem (2) results from one step of the iterative algorithm for a Tweedie model. This algorithm additionally allows us to introduce methods to "fill in the gaps" between Skip-gram and GloVe and to introduce altogether new methods for finding word vectors. We saw that Skip-gram and GloVe compute a co-occurence matrix X which results from processing the corpus D and an objective function J to relate the matrix X to a product of low rank matrices U and V. Many existing approaches for explaining word embedding methods do so by identifying or deriving the co-occurence matrix X or the objective function J.
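To make the two pre-processing choices above concrete, here is a minimal sketch of constructing both the count and the harmonic co-occurrence matrices from a tokenized corpus with a context window of length l_c. The symmetric double-counting is one reasonable convention assumed here, not necessarily the one used by the reference implementations.

```python
import numpy as np

def cooccurrence_matrices(tokens, vocab_size, l_c):
    """tokens: corpus as a list of word ids; l_c: context window length.
    Returns (count X, harmonic X), both of shape (vocab_size, vocab_size)."""
    X_count = np.zeros((vocab_size, vocab_size))
    X_harm = np.zeros((vocab_size, vocab_size))
    for p1, wi in enumerate(tokens):
        for p2 in range(p1 + 1, min(p1 + l_c + 1, len(tokens))):
            wj = tokens[p2]
            X_count[wi, wj] += 1.0                 # count of co-occurrences within l_c tokens
            X_count[wj, wi] += 1.0
            X_harm[wi, wj] += 1.0 / (p2 - p1)      # harmonic weighting by token distance
            X_harm[wj, wi] += 1.0 / (p2 - p1)
    return X_count, X_harm
```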
In this section, we review relevant work in this area, which helps frame our approach discussed in Section 4.1.Much of the related work involves using the co-occurence matrix from Skip-gram. For the remainder of this section, let X be the count co-occurence matrix. Early approaches to finding low-dimensional embeddings of words relied on the singular value decomposition [,]. These methods would truncate the singular value decomposition by zeroing out the small singular values. BID7 show that this is equivalent to using an objective function J which is invariant to orthogonal transformation. For simplicity, we specialize to the Frobenius norm and say these early approaches find arg min DISPLAYFORM0 F is the objective function and X is the co-occurence matrix. The co-occurence matrix and the loss function for Skip-gram can be read off from problem: the co-occurence matrix is X and the objective function is written in problem with u u u T i v v v j replaced by m ij. BID4 find a probabilistic interpretation of this loss function related to a Multinomial distribution, but do not take advantage of it and only replace the inner product with a (higher dimensional) variant, somewhat similar to the approach in Tifrea et al.. Mikolov et al. [2013a] introduce Skip-gram with negative sampling (SGNS), a variant of Skip-gram. If we view Skip-gram as maximizing the true positive rate of predicting a word will appear within a context window of another word, we can view SNGS as maximizing the true positive rate plus k times an approximation of the true negative rate. When k = 0, Skip-gram and SGNS coincide. BID19 use a heuristic argument to interpret SGNS as using a co-occurence matrix that is a shifted PMI matrix.2 However, they did not determine the objective function. Later, Li et al. and Landgraf and Bellay explicitly identified both the co-occurence matrix and the objective function. They find a different co-occurence matrix than BID19, one that does not depend on k, while their loss function does depend on k. Surprisingly, they establish that SGNS is finding a low-rank matrix related to X, the same matrix that Skip-gram uses. The loss function is w,w i,j=1 DISPLAYFORM1 2 Define the total number of times word wi appears to be xi· = w j=1 xij, the total number of times context wj appears to be x·j = w i=1 xij, and the total number of words to be x·· = w,w i,j=1 xij. The shifted PMI matrix has entries log DISPLAYFORM2 Landgraf and Bellay explain that this loss function has a probabilistic interpretation, and they use that interpretation to recover the shifted PMI matrix as a prediction from within their model. The approach in this paper will be to view the entries of the co-occurence matrix as random variables and introduce an objective function via the likelihood of that random variable. Our approach is most similar to Landgraf and Bellay and, to a lesser extent, BID4. In order proceed, some in probabilistic modeling and estimation needs to be developed. In this section, we review iteratively reweighted least squares (IRLS) for generalized linear models and review generalized low rank models []. Further (and notation) in exponential dispersion families and generalized linear models is developed in Section A. Generalized linear models (GLMs) are a flexible generalization of linear regression where the mean is a not necessarily linear function of a coefficient β β β and the response has an error distribution which is an exponential dispersion family. 
The coefficient β β β is unknown and a target of estimation. The standard approach to estimate β β β is maximum likelihood estimation [, Section 7] to produce the maximum likelihood estimator, or MLE,β β β. A computational approach to find the MLE is through Fisher scoring, a variant of Newton's method on the log likelihood which uses the expectation of the Hessian in place of the Hessian [, Section 4.5]. Define (β β β) to be the log likelihood. Specifically, Fisher scoring produces a sequence of estimates {β β β DISPLAYFORM0), where ∇ is the gradient and D 2 is the Hessian. Upon plugging in the gradient and expected Hessian for an exponential dispersion family, a surprising identity emerges: each iteration of Fisher scoring is equivalent to minimizing a weighted least squares objective: DISPLAYFORM1 where the weight H (t) and pseudo-response z (t) at iteration t have DISPLAYFORM2 DISPLAYFORM3, and µ DISPLAYFORM4 Principal components analysis BID13 is one well-known method for finding a low rank matrix related to X ∈ R w×c. In principal components analysis, we model x ij ind. DISPLAYFORM0 A maximum likelihood estimator for u u u i is taken to be a low-dimensional embedding of the i th row of X. The low-dimensional embedding enables interpretability and reduces noise. However, data cannot always be viewed as being drawn from a normal distribution, so it's necessary to extend the method of principal components to non-normal data. The extension can be made in a manner similar to the extension from linear models to generalized linear models: the new model, called a generalized low rank model [] allows us to estimate model-based low-dimensional embeddings of non-normal data. Definition 1 For some exponential dispersion family ED(µ, ϕ) with mean parameter µ and dispersion parameter ϕ, the model for X ∈ R w×c is a generalized low rank model with link function g when DISPLAYFORM1 DISPLAYFORM2 where u u u i, v v v j ∈ R d are the rows of matrices U ∈ R w×d and V ∈ R c×d, respectively, and a a a ∈ R w and b b b ∈ R c are bias (or offset) terms. The difference between the generalized low rank model and the generalized linear model is in the systematic component in equation FORMULA12. Here, the data is modeled as having its link-transformed mean be a matrix with rank at most d. This formalizes the way in which we relate the co-occurence matrix X to a low rank factorization. When the link function g is taken to be canonical, the generalized low rank model is identical to ePCA BID3. The generalization is worthwhile since the canonical link can be inappropriate, as we will see, for instance, in Section 5.1. We now present a method to find word vectors. A key innovation in the method is an iterative algorithm inspired by IRLS to find a maximum likelihood estimator in a generalized low rank model. Our method has three steps:Step 1 Choose a co-occurence matrix X ∈ R w×c to summarize the document. (Note, in many cases c = w so that the "contexts" are just the words.)Step 2 Choose a plausible exponential dispersion family to model the entries of the co-occurence matrix. Choose a corresponding link function. Step 3 Choose a number of iterations r to run IWLRLS (Algorithm) with the input specified above to output word vectors. DISPLAYFORM0; Evaluate the least squares problem arg min DISPLAYFORM1 Algorithm 1: Iteratively weighted low rank least squares (IWLRLS) algorithm for GLRMsThe first step of our method processes the corpus in order to extract the linguistic information. 
Some co-occurence statistics use more information than others: for instance, the harmonic co-occurence matrix makes use of the number of tokens between words while the count co-occurence matrix does not. A typical tuning parameters here is the length l c of the context window. We view this step as involving a "linguistic" choice. The second step specifies a distribution for the co-occurence matrix. A distribution can be considered as plausibly corresponding to reality if it can be derived by a connection to the corpus. In our Co-occurence: Harmonic Table 1: The rows refers to the number of steps of IWLRLS. A "·" represents no existing work. All filled-in positions in the lowest row were established in previous work. framework, the model is explicit: this is helpful since knowing a model provides interpretation for its output [, Section II.A.]. The choice of distribution will often determine, through convention, the link function, so the link function often does not need to be separately chosen. We view this step as involving a "statistical" choice. DISPLAYFORM2 The third step runs IWLRLS, a generalized version of IRLS. Recall that IRLS is derived by iteratively maximizing a second order Taylor expansion of the likelihood as a function β. The Taylor expansion is centered at the previous iterate. IWLRLS can be derived by iteratively maximizing a second order Taylor expansion of the likelhood as a function of η subject to the constraint 6. We view this as a "computational" choice that we fix in advance. In the following subsections, we run through many examples of our method as it would be used in practice. There are two distinct choices of co-occurence matrices that are made. Various choices of distributions recover common methods for finding word vectors. An altogether new estimator is proposed via an improvement of the assumed distribution in Skip-gram. Casting these estimators in this general framework provides an interpretation and understanding of them: we make explicit their assumptions and therefore know the driver of their behavior. We will apply our proposed method under the choice of the harmonic co-occurence matrix and the Tweedie distribution: one iteration of IWLRLS will recover GloVe. Step 1 The first step of our method is to pick a co-occurence matrix that summarizes the corpus. We choose the harmonic co-occurence matrix X ∈ R w×w.Step 2 Now we must determine a plausible distribution for the co-occurence matrix that is an exponential dispersion family. Recall that the Tweedie distribution has the property mentioned in equation FORMULA0 that it is a sum of Poisson many independent Gamma distributions. An informal way to write this is that DISPLAYFORM0 We argue that the Tweedie distribution is reasonable by connecting the Poisson and Inverse Gamma distributions displayed above to attributes of the corpus. Intuitively, it is reasonable that the number of times word w i and word w j co-occur within the corpus can be modeled as having a Poisson distribution. Another choice of distribution is that of an Inverse Gamma distribution for the number of tokens between word w i and word w j at some co-occurence, although it is an approximation as the number of tokens is an integer while the Inverse Gamma is supported on non-integers. Instead of using the canonical link function, we will take g(µ) = log µ, which is standard []. A problem with the canonical link function preventing its use is that its range is nonpositive. 
Step 3 Next, we find the form of the weight H and the pseudo-response Z that the Tweedie distribution provides. This amounts to plugging in the cumulant generating function ψ that is given in Section A.1. This in DISPLAYFORM1 When the algorithm is initialized withμ = X, the pseudo-response simplifies to z ij = log x ij.Taking the power p = 1.25, the weight simplifies to x 3/4 ij. In summary, we've shown that: Result 1 Inputting the harmonic co-occurence matrix, the Tweedie distribution with power p = 1.25, the log link, and the number of iterations k = 1 into IWLRLS in GloVe (without the algorithmic regularization induced by truncating the weights.)Given this connection, we can extend GloVe for several iterations rather than one or even use the full likelihood. We experiment with this using real data examples in Section 6. This shows that even though the first iteration does not depend on word pairs where x ij = 0, later iterations do. We now consider an alternative first step: we choose another co-occurence matrix to summarize the corpus. Then, we make multiple possible choices for step 2 to illustrate connections to previous work that step 3 recovers. Various choices for step 2 will recover the SVD BID17 ], Skip-gram [a], a new estimator which is a distributional improvement over those, and Skip-gram with negative sampling [b].Step 1 We choose the count co-occurence matrix. Step 2 A proposed distribution for the entries of X is the Gaussian distribution. This may not be the best choice, since the entries of X are non-negative integers. As is usual, we take the link function to be g(µ) = µ. We restrict the systematic component to not include the bias terms, so that η ij = u u u DISPLAYFORM0 Step 3 We showed in Section A.1 that the cumulant generation function from the normal distribution is ψ(θ) = 1 2 θ 2. This makes it so that DISPLAYFORM1 In other words, the IWLRLS algorithm will always converge in one iteration, so our method recovers the method of computing a truncated SVD of X by BID7.Another choice that could have been made in step 2 is to have the link function g(µ) = log µ. This still may not be the best choice since the normal distribution still has the same problems as before. Step 2 Another proposed distribution for the entries of X is a Multinomial distribution. Specifically, we could propose that the the row of X corresponding to word w i has the distribution x x x i ∼ Multinomial w j=1 x ij, π π π, where π π π ∈ R w is vector of probabilities of word w i appearing within a context window with the other words and w j=1 x ij is the total number of times word w i appears in the corpus. We take the link function to be the multi-logit. 3 Cotterell et al. show that the objective function of Skip-gram coincides with the likelihood of this model when the bias terms are removed, so that the systematic component η ij = u u u Step 3 The Poisson trick BID2 can be used to reduce estimation in a Multinomial model to estimation in a particular Poisson model. LetÛ,V be the maximum likelihood estimators in the Multinomial generalized low rank model described in step 2. Using this trick, it holds thatâ a a, (the same)Û, and (the same)V are maximum likelihood estimators in a Poisson generalized low rank model with independent responses x ij and systematic component DISPLAYFORM0 Notice that there is only one bias term. 
The weight and pseudo-response are DISPLAYFORM1 In the previous subsubsection, we saw that the choice of Multinomial model implicitly gives rise to a Poisson model with a systematic component given by equation FORMULA20. Since it could be most appropriate to have bias terms for both rows and columns due to the symmetry of the co-occurence matrix, we directly introduce a Poisson estimator with a non-restricted systematic component. Step 2 Another proposed distribution is a Poisson. Due to the "law of rare events" [, Section 3.6 .1], this is a plausible model. We use the canonical link function g(µ) = log µ.Step 3 The cumulant generating function is ψ(θ) = exp(θ) BID0, so that the weight and pseudo-response are given by equations. BID1 propose an estimator which is a close variant of one iteration of IWLRLS. At one point in their derivation, they (using our notation) take η ij = u u u i − v v v j 2 2 + c, where c is an arbitrary constant which does not depend on the word. This is inspired by their theorem 2.2. On the other hand, taking η ij = u u u T i v v v j + a i + b j (as in equation 6) in their derivation recovers one iteration of IWLRLS. The Negative-Binomial distribution is commonly used as an alternative for the Poisson in the presence of over-dispersion, which is the case when the variance is higher than the mean. It produces the same weight and pseudo-response as the Poisson. Step 2 We model x ij ind. DISPLAYFORM0 is an inflated count, η ij = u u u T i v v v j, and k ≥ 0. Landgraf and Bellay showed that a maximum likelihood estimator from this model with canonical link g(π) = log π 1−π is identical to a SGNS estimator. Step 3 The cumulant generating function for the binomial distribution is ψ(θ) = log(1 + exp θ), so the weight and pseudo-response are: DISPLAYFORM1 In Section 4.1 we introduced the IWLRLS algorithm to compute word vectors such as those produced by GloVe or SGNS. We now conduct quantitative evaluation experiments on an English word analogy task, a variety of word similarity tasks [a] to demonstrate the performance of the algorithm. First, in Section 6.1 we introduce the analogy similarity task for evaluating word vectors. In Section 6.2 we present of the algorithm with different distributions according to those presented in Section 5.1 and 5.2. In Section B.1 we provide parameter configurations and training procedures, and in Sections B.2-B.5 we present of IWLRLS in numerous scenarios showcasing improvement through multiple iterations and robustness to other model parameters. We introduce the word analogy task following the presentation of []. The word analogy task is a dataset of 19, 544 statements of the basic form "a is to b as c is to __", which are divided into a semantic and syntactic subsets. The semantic statements are typically analogies relating to people, places, or nouns such as "Athens is to Greece as Berlin is to __", while the syntactic questions relate to verb or adjective forms such as "dance is to dancing as fly is to __". The basic analogy statement is answered by finding the closest vector u u u d to u u u b − u u u a + u u u c 4 in the embedding space via cosine similarity 5. The task has been shown under specific assumptions to be provably solvable by methods such as GloVe and Skip-gram BID8 BID12 ] and as such is closely related to solving the objectives introduced in Sections 1 and 4.1. In this section, of the IWLRLS algorithm are performed for the Tweedie, Multinomial, and Poisson models. 
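As a concrete illustration of the evaluation protocol just described, the analogy "a is to b as c is to __" is answered by the nearest neighbour of u_b - u_a + u_c under cosine similarity. A minimal sketch, with our own variable names and the standard convention of excluding the three query words from the candidate set, is:

```python
import numpy as np

def answer_analogy(U, vocab, a, b, c):
    """Return the word d whose vector has highest cosine similarity to u_b - u_a + u_c.

    U is the (num_words x d) embedding matrix and vocab the matching list of words.
    """
    idx = {w: i for i, w in enumerate(vocab)}
    Un = U / np.linalg.norm(U, axis=1, keepdims=True)
    target = Un[idx[b]] - Un[idx[a]] + Un[idx[c]]
    target /= np.linalg.norm(target)
    scores = Un @ target
    for w in (a, b, c):               # the query words are excluded, as is standard
        scores[idx[w]] = -np.inf
    return vocab[int(np.argmax(scores))]
```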
Based on the additional experiments in Sections B.2-B.6 we train the Tweedie model with p = 1.25 (Section B.3) and for all models include weight truncation to penalize large co-occurrences (Section B.4), regularization terms (outlined in Section B.5), and include only a single bias term within the systematic component of the Tweedie model (Section B.6).Step To demonstrate the effectiveness of performing multiple iterations of the IWLRLS algorithm, we present for the one-step estimator, an early-stopped estimator, and the full-likelihood estimator. Of particular interest in our are the Tweedie one-step estimator (a variant of the GloVe method), and the full-likelihood estimator for the Multinomial (a variant of the Skip-gram method). For the in TAB1, the full-likelihood is taken to be the iteration which achieves the maximum total accuracy on the analogy task, and the early-stop algorithm is taken to be an iteration between the one-step and full-likelihood iterations which performs best in total accuracy on the analogy task. For both the Tweedie and Multinomials, the full-likelihood is the after 3 iterations and the early-stopped is the after 2 iterations. For the Poisson model, the full-likelihood is the after 9 iterations, and the early-stopped is the after 3 iterations. We find a small difference in total accuracy on the analogy task with the one-step estimator (GloVe) and the full-likelihood differing by roughly 1%. We find a similar relationship in the Poisson estimator and further note that the early-stopped estimator for the Poisson has very similar accuracy to the full-likelihood algorithm. Finally, the Multinomial model yields a difference of 2% between the full-likelihood algorithm (Skip-gram) and the one-step algorithm. The early-stopped algorithm for the Multinomial also performs 1% higher than the one-step algorithm indicating a fair tradeoff between running an additional iteration and stopping after only one iteration. We present a general model-based methodology for finding word vectors from a corpus. This methodology involves choosing the distribution of a chosen co-occurrence matrix to be an exponential dispersion family and choosing the number of iterations to run our algorithm. In Table 1, we see that our methodology unifies the dominant word embedding methods available in the literature and provides new and improved methods. We introduce an extension of Skip-gram that is stopped before full-convergence analagously to GloVe and an extension to GloVe beyond one iteration. Experimental on a small corpus demonstrate our method improves upon GloVe and Skip-gram on the Google word analogy similarity task. It is our hope that this methodology can lead to the development of better, more statistically sound, word embeddings and consequently improve on many other downstream tasks. Further in exponential dispersion families and generalized linear models is developed here. We begin by discussing exponential dispersion families, the distribution of the response in generalized linear models. Definition 2 Let y ∈ R be a random variable. If the density function f (y; θ, ϕ) of y satisfies DISPLAYFORM0 over its support, then the distribution of y is in the exponential dispersion family. The parameter θ is the natural parameter, ϕ is the dispersion parameter, and the function ψ is the cumulant generating function. 
In many cases, the function δ(ϕ) is very simple, meaning that, for instance, δ(ϕ) = 1 or δ(ϕ) = ϕ.The function c(y; ϕ) can be viewed as the normalizing constant ensuring that the density integrates to one. When y follows a distribution in the exponential dispersion family with natural parameter θ, its mean µ = ψ (θ), so we can equivalently specify the mean µ or the natural parameter θ. Many classical distributions such as the Poisson, Normal, Binomial, and Gamma distribution are exponential dispersion families. For example, when y ∼ Normal(µ, σ 2) is a normal distribution with mean µ and variance σ 2, its log density satisfies DISPLAYFORM1 showing that here the natural parameter θ = µ, the dispersion parameter ϕ = σ 2, the functions ψ(θ) = 1 2 θ 2, δ(ϕ) = ϕ, and c(y; ϕ) = 1 2 log(2πσ 2) + y 2 2σ 2. The Tweedie distribution BID15, of particular importantance to us, also lies within the exponential dispersion family. Instead of defining the Tweedie distribution through the form of its density, we will define it through the relationship between its mean and variance. This relies on a from [Jørgensen, 1987, Theorem 1] that distributions within the exponential dispersion family are defined by the relationship between their mean and variance. Definition 3 A random variable y has a Tweedie distribution with power parameter p ∈ {0} ∪ [1, ∞) when DISPLAYFORM2 and the distribution of y is an exponential dispersion family. In this case, we write y ∼ Tweedie p (µ, ϕ), where µ = E(y) is the mean. The Normal distribution discussed above has a variance that does not depend on the mean. In our new notation, this means that the Normal distribution is a Tweedie distribution with power parameter p = 0. The Poisson distribution has variance equal to the mean and is in the exponential dispersion family, so is a Tweedie distribution with power parameter p = 1 and dispersion parameter ϕ = 1. A Gamma distribution with shape parameter α and rate parameter β is a Tweedie distribution with power p = 2, mean µ = α β, and dispersion parameter ϕ = α −1.We will only consider Tweedie distributions with power parameter p ∈. These distributions are also known as compound Poisson-Gamma distributions due to the representation DISPLAYFORM3 where n ∼ Poisson(λ) and g i DISPLAYFORM4 ∼ Gamma(α, β), and λ = BID15. It is important to note that the Tweedie distribution has positive mass at zero, an important characteristic for capturing the zero-inflation prominent in some co-occurence matrices due to some words never appearing within the same context. Specifically, DISPLAYFORM5 Using other arguments related to representations of the mean and variance in terms of the cumulant generating function ψ, BID15 show that the Tweedie distribution has ψ(θ) DISPLAYFORM6 Exponential dispersion families are defined over real numbers. Now, we generalize their definition to a multivariate setting. Definition 4 Let y y y ∈ R m be a random vector. If the density function f (y y y; θ θ θ, ϕ) of y y y satisfies log f (y y y; θ θ θ, ϕ) = y y y DISPLAYFORM0 over its support, then the distribution of y y y is in the multivariate exponential dispersion family. The parameter θ θ θ ∈ R m is the natural parameter, ϕ ∈ R is the dispersion parameter, and the function ψ: R m → R is the cumulant generating function. A collection of independent draws from the same exponential dispersion family is a multivariate exponential dispersion family. To see this, let y i (i = 1, . . ., m) be i.i.d. from an exponential dispersion family. 
Then, the density of y y y satisfies log f (y y y; θ θ θ, ϕ) = DISPLAYFORM1 + c(y j ; ϕ), which has cumulant generation function ψ(θ θ θ) = m j=1 ψ(θ j). Another useful example of a multivariate exponential dispersion family is the Multinomial. Let x x x ∈ R c have be distributed as x x x ∼ multinomial(s, π π π), where s ∈ N is the total number of draws and π π π ∈ R c is the probability vector. Introduce a change of parameters where DISPLAYFORM2 showing that the multinomial distribution is in the multivariate exponential dispersion family with ψ(θ θ θ) = s log (c k=1 exp θ k). We start by reviewing the linear model. Given a response y y y ∈ R n comprising n observations, the model for y y y is a linear model with covariates x x x i ∈ R p when y i ind.∼ Normal(x x x T i β β β, σ 2) for all i ∈ {1, . . ., n}. In vector notation, this reads that y y y ∼ Normal(Xβ β β, σ 2 I), where X ∈ R n×p is a matrix with i th row x x x T i. This is one of the more primitive models in the statistical modeling toolbox and isn't always appropriate for the data. Generalized linear models remove the the assumptions of normality and that the mean is a linear function of the coefficients β β β. Definition 5 For some exponential dispersion family ED(µ, ϕ) with mean parameter µ and dispersion parameter ϕ, the model for y y y ∈ R n is a generalized linear model with link function g when DISPLAYFORM0 DISPLAYFORM1 In the first line of the displayed relationships, the distribution of the response y y y is described. In the third line, the systematic component η i expresses the effect of the covariates x x x i. The second line connects the distribution to the covariates through the link function. That is, the covariates effect a link-transformed mean. The canonical link (b) −1 is often chosen as the link function, due to its computational and mathematical properties BID0. Other times, the canonical link is inappropriate and there are alternative default choices. Generalized linear models are used as the default modeling framework in many fields of applied science for non-normal distributions []. When g(µ) = µ is the identity map and ED is the Normal distribution, the generalized linear model is simply the linear model. When g(µ) = logit(µ) = log 1−µ µ and ED is the Binomial distribution, the generalized linear model is logistic regression. Further, a generalized linear model can be viewed as a no-hidden-layer neural network with activation function g. We train our models on the English text8 corpus 6 with approximately 17 million tokens. We filter out word types that occur fewer than 50 times to obtain a vocabulary size of approximately 11, 000; a ratio consistent with other embedding literature experiments 7.The adjustable model configurations in IWLRLS are the choice of power parameter p, penalty tuning parameter λ, and co-occurrence processing step. We experiment with different choices of p ∈ {1.1, 1.25, 1.5, 1.75, 1.9}, different choices of processing including no processing, clamping the weights (as in GloVe) and truncating the outliers in the co-occurrence matrix (elaborated on in Section B.4, and set the penalty tuning parameter λ = 0.002. The estimated word vectors are the rows of DISPLAYFORM0 . For all of our experiments, we set the dimension of the word vectors to d = 150, and the objective function at each iteration is optimized using Adagrad BID5 with a fixed learning rate of 0.1 8 . Models are trained for up to 50 epochs (50 passes through the co-occurrence matrix) with batches of size 512. 
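To make the inner optimization of each IWLRLS iteration concrete, the following sketch carries out the first iteration for the Tweedie, log-link case of Section 5.1 (pseudo-response z = log x, truncated weight min(x, x_max)^0.75) with Adagrad and the Frobenius penalty discussed in Section B.5. The hyperparameter values mirror those quoted above, but the code is our illustrative reconstruction rather than the authors' implementation, and it optimizes the full batch of nonzero co-occurrences for brevity.

```python
import torch

def iwlrls_inner_step(X, d=150, lr=0.1, lam=0.002, x_max=10.0, n_epochs=50):
    """First IWLRLS iteration for the Tweedie/GloVe case, initialized at mu-hat = X.

    Only nonzero co-occurrences enter this iteration because the weight vanishes at x = 0.
    """
    w_idx, c_idx = torch.nonzero(X, as_tuple=True)
    x = X[w_idx, c_idx]
    z = torch.log(x)                           # pseudo-response under the log link
    h = torch.clamp(x, max=x_max) ** 0.75      # truncated Tweedie (p = 1.25) weight

    n_words, n_contexts = X.shape
    U = torch.nn.Parameter(0.01 * torch.randn(n_words, d))
    V = torch.nn.Parameter(0.01 * torch.randn(n_contexts, d))
    a = torch.nn.Parameter(torch.zeros(n_words))
    b = torch.nn.Parameter(torch.zeros(n_contexts))
    opt = torch.optim.Adagrad([U, V, a, b], lr=lr)

    for _ in range(n_epochs):   # full-batch for brevity; the paper uses minibatches of 512
        opt.zero_grad()
        eta = (U[w_idx] * V[c_idx]).sum(-1) + a[w_idx] + b[c_idx]
        loss = (h * (z - eta) ** 2).sum() + lam * (U.pow(2).sum() + V.pow(2).sum())
        loss.backward()
        opt.step()
    return U.detach(), V.detach(), a.detach(), b.detach()
```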
We evaluate the impact of multiple iterations of the IWLRLS algorithm on all models, but examine different additions to the model only when p = 1.25. We believe the impact of these changes will be present however for any value of p. We present of multiple iterations of our IWLRLS algorithm with different distributions. In particular, we perform multiple iterations of the IWLRLS algorithm with Tweedie distribution and weight truncation to match the GloVe objective function and processing by setting the weight function in our model from h(x) = x 2−p to h(x) = (min{x, x max}).75 with x max = 10 and p = 1.25. We also presents for an early-stopped version of skip-gram, and the new Poisson estimator. The are summarized in FIG4. We remark on a few observations based on these . First, as the number of steps increases, the accuracy on the analogy task increases for the first few iterations. Second, relatively few steps are needed with the accuracy of Tweedie model performing best at the first and second steps of the algorithm, and the Multinomial and model performing best in steps 3-5 but with very similar performance at earlier steps. The Poisson model performs best after 9 iteration, however performs nearly identically to the of an early stopped algorithm at 3 iterations. In , we find that early-stopped and one-step versions of the algorithm can perform comparably to full-likelihood methods. In this section, we examine the effect of the choice of the power p in the tuning parameter when you run a Tweedie generalized low rank model. Table 3: Results for multiple choices of p for one and two iterations. The Results in Table 3 show that values of p which are high perform poorly, while values of p below 1.5 perform similarly. We find that p = 1.25 performs the best, and view this value of p as a good choice as it accounts for zero-inflation present in the co-occurence X. This also agrees with the of [] and BID16 ].An even more interesting and perhaps more appropriate way to estimate the power p of the Tweedie distribution is in a data-driven and model-based way. This approach is taken in BID16. In future work, we plan to use an improved estimating equation relative to BID16 ] to estimate p as part of the algorithm. This would be modeling the marginal distribution of the co-occurences as being Tweedie with the same power. Under a similar assumption, modified likelihood calculations are tractable and so are another possibility. We plan to explore this in future work. We set p = 1.25 in our algorithm with Tweedie distribution, and explore the effect of different strategies in handling large entries in the co-occurrence matrix X. One strategy is to simply input X into step 3 of our method. A second strategy is to clamp the weight h(·) that from step 3 of our method by taking h(x) = (min{x, x max}).75 as in GloVe. A third strategy is to input min{x, x max} for each entry of the matrix X, where x max = 10, into step 3 of our method 9.We find that no adjustment to the weights and GloVe's method of weight truncation both perform similarly with weight truncation slightly outperforming no adjustment. We suspect a more significant improvement will show with larger corpora such as a full Wikipedia corpus. Alternative approaches to alleviating the problem of large co-occurences are to use a more robust distribution or link function. Indeed, the weight truncation in GloVe can be directly mimicked by Table 4: Results for multiple choices of regularizing the large values of the co-occurrence matrix. 
Our strategies are harmonic matrix, truncation of the weight only, truncation of the co-occurrence matrix to x max = 10.either altering the distribution or the link function. The desired form can be found via the weight and pseudo-response equations in algorithm 1. We leave this to future work. Table 5: Results for including the penalty term in Equation FORMULA0 and not including the diagonal terms. We consider regularizing the word vectors by including the penalty DISPLAYFORM0 with λ =.002 for two reasons. One is to reduce noise in the estimation of the word vectors. Udell et al. [2016, Lemma 7.3] show that penalizing by is equivalent to penalizing by λ U V T *, the nuclear norm of U V T. Since penalizing the nuclear norm U V T shrinks the dimension of the embedding and larger dimensional embeddings tend to be better [], we choose a small tuning parameter to reduce noise while still preserving the dimension. Another reason is to symmetrically distribute the singular values ofÛV T to both matricesÛ andV.Write the singular value decompositionÛV T = U ΣV T, for U and V orthogonal and Σ diagonal. Mu et al. [2018, Theorem 1] shows that using penalty in havingÛ = U Σ 1/2 Q and V = V Σ 1/2 Q for some orthogonal matrix Q. This is desirable since it was argued empirically by Levy et al. that a symmetric distribution of singular values works optimally on semantic tasks. Finally, we experiment with whether the penalty introduced in Equation FORMULA0 improves and accurately reduces noise in the estimate. We also consider not including the diagonal elements of X as a form of regularization and experiment here as well, as these terms are often large (can be considered as outliers) and do not contain a great deal of linguistic information. Table 5 demonstrates the included regularization within the IWLRLS algorithm with Tweedie distribution and p = 1.25 improves . In Experiment 1, we found that the Multinomial model outperforms the Poisson model, although the Poisson model has an additional bias term to model context word frequencies. This was fairly counterintuitive, so we additionally experiment with having only a single bias term a i in the Tweedie model as in the Multinomial model. We find overall that the Tweedie model with a systematic component without the bias term b j performs slightly better than the Tweedie model with systematic component containing both bias terms a i and b j. We hope to further study the impact of bias terms and other systematic components in future work. Table 6: Results for including the bias term on the context word b j in addition to a i.
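The nuclear-norm motivation for the penalty in Section B.5 can be checked numerically: for the balanced factorization U = U_0 Sigma^(1/2), V = V_0 Sigma^(1/2) obtained from an SVD of a matrix M, the quantity (1/2)(||U||_F^2 + ||V||_F^2) equals ||M||_*, the nuclear norm. A small self-contained check (our own illustration, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(20, 8)) @ rng.normal(size=(8, 15))   # a rank-8 matrix

U0, s, V0t = np.linalg.svd(M, full_matrices=False)
U = U0 * np.sqrt(s)          # balanced factors: U0 @ diag(sqrt(s)), V0 @ diag(sqrt(s))
V = V0t.T * np.sqrt(s)

nuclear_norm = s.sum()
penalty = 0.5 * (np.linalg.norm(U, 'fro') ** 2 + np.linalg.norm(V, 'fro') ** 2)
print(np.allclose(M, U @ V.T), np.isclose(penalty, nuclear_norm))  # True True
```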
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJgjYyaio7
We present a novel iterative algorithm based on generalized low rank models for computing and interpreting word embedding models.
Deterministic models are approximations of reality that are often easier to build and interpret than stochastic alternatives. Unfortunately, as nature is capricious, observational data can never be fully explained by deterministic models in practice. Observation and process noise need to be added to adapt deterministic models to behave stochastically, such that they are capable of explaining and extrapolating from noisy data. Adding process noise to deterministic simulators can induce a failure in the simulator, resulting in no return value for certain inputs -- a property we describe as ``brittle.'' We investigate and address the wasted computation that arises from these failures, and the effect of such failures on downstream inference tasks. We show that performing inference in this space can be viewed as rejection sampling, and train a conditional normalizing flow as a proposal over noise values such that there is a low probability that the simulator crashes, increasing computational efficiency and inference fidelity for a fixed sample budget when used as the proposal in an approximate inference algorithm. In order to compensate for epistemic uncertainty due to modelling approximations and unmodeled aleatoric uncertainty, deterministic simulators are often "converted" to "stochastic" simulators by randomly perturbing the state at each time step. In practice, models adapted in this way often provide better inferences (Møller et al., 2011). State-independent white noise with heuristically tuned variance is often used to perturb the state. However, naively adding noise to the state will, in many applications, render the perturbed input state "invalid," inducing failure. These failures waste computational resources and reduce sample diversity, worsening inference performance. Examples of failure modes include ordinary differential equation (ODE) solvers not converging to the required tolerance in the allocated time, or the state crossing into an unhandled configuration, such as solid bodies overlapping. Establishing the cause of failure is non-trivial, and hence the simulation artifact can be sensitive to seemingly inconsequential alterations to the state -- a property we describe as "brittle." The principal contribution of this paper is a technique for minimizing this failure rate. We proceed by first framing sampling from brittle simulators as rejection sampling. We then eliminate rejections by learning the state-dependent density over perturbations that do not induce failure, using conditional autoregressive flows. Doing so leaves the joint distribution unchanged and retains the interpretability afforded by the simulator, but improves sample efficiency. We show that using the learned proposal increases the fidelity of the inference attainable on a range of examples. We denote the brittle deterministic simulator as f: X → {X, ⊥}, X = R^D, where a return value of ⊥ denotes failure. Over the whole support, f defines a many-to-one function, as many states map to ⊥; however, we only require that f is one-to-one in the accepted region, a condition satisfied by ODE models. A stochastic, additive perturbation to the state, denoted z t ∈ X, is proposed such that x t ← f (x t−1 + z t), z t ∼ p(·|x t−1), although this proposal is often state independent. We include more detailed derivations in Supplementary Materials Section B. The naive approach to iterating the perturbed system is to repeatedly sample from the proposal distribution and evaluate f until the simulator successfully exits.
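Written out, this naive scheme is just a retry loop around the simulator; the sketch below paraphrases Algorithm 1 from the supplementary material with our own variable names, using None as a stand-in for the failure value ⊥.

```python
BOTTOM = None  # stand-in for the failure value ⊥

def iterate_brittle(f, x_prev, sample_perturbation, max_tries=1000):
    """Repeatedly draw z ~ p(.|x_prev) and evaluate f(x_prev + z) until it succeeds.

    Accepted samples are implicitly distributed according to the accepted-state
    distribution; every failed call is wasted computation, which is exactly what
    the learned proposal is meant to remove.
    """
    for _ in range(max_tries):
        z = sample_perturbation(x_prev)
        x_next = f(x_prev + z)
        if x_next is not BOTTOM:
            return x_next, z
    raise RuntimeError("simulator failed for every proposed perturbation")
```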
We begin by showing that this process defines a rejection sampler. The use of f and p(z t |·) implicitly specifies a distribution over successfully iterated states, p(x t |x t−1); and consequently a second distribution over accepted perturbations, denoted p(z t |x t−1), which, under the process outlined above, can be written as: p(z t |x t−1) = 1 Mp p(z t |x t−1), if f (x t−1 + z t) = ⊥ 0, otherwise where the normalizing constant M p is the acceptance rate under p. In regions that fail the sample is rejected with certainty. In the accepted region,p ∝ p, which is a sufficient condition for a rejection sampler to be valid, without needing to evaluate M p orp. This represents a rejection sampler with an acceptance rule of I [f (x t−1 + z t) = ⊥] targetinḡ p(x t |x t−1). We now seek to learn a proposal distribution over z t values conditioned on the current state x t−1, denoted q φ, parameterized by φ, to replace p but placing no mass on regions that are rejected, ing in an acceptance rate tending to unity. We denote q φ as the proposal we train, which, coupled with the simulator, implicitly defines a proposal over accepted samples, denoted q φ. We wish to minimize the distance between joint distribution implicitly specified over accepted iterated states using p, f and q φ, amortized across state space : Expanding the Kullback-Leibler divergence (D KL), applying a change of variables yields, and noting that the Jacobian terms can be cancelled yields: However, q φ is defined implicitly after rejection sampling, and so we can adapt for q φ and substitute back into. Differentiation of the acceptance rate (M q φ) is intractable. However, noting that φ * is a maximizer for both q φ andq φ, we can instead optimize q φ (z t |x t−1): By removing the rejection sampler as we have, we have implicitly specified that the proposal distribution must have an acceptance rate of one. This term is differentiable with respect to φ and so we can maximize this quantity using stochastic gradients. Importantly the flow is not explicitly trained to maximize the acceptance rate. The flow is trained to minimize the KL divergence between the implicitly specified distribution over accepted samples and the learned proposal distribution. Accordingly the flow retains the shape of the proposal distribution in regions of state space that do not yield failure (this can be seen by comparing red and green contours in the interior of the dashed lines in Figure 1a) and hence the learned distribution cannot collapse to add trivially small perturbations, as would be the case if we had directly optimized for high acceptance rates. By exploiting change of variables we are able to "project" back through the rejection sampling procedure and hence we can optimize q φ as we do not need to compute the derivative of the rejection rate as we would have otherwise needed to do. Finally, we note that generation of data training data and learning of the autoregressive flow is a computationally intensive procedure. However simulators can take on the order of seconds to iterate and so the intention of this work is to create a technique that maximizes computational efficiency when deployed. Sampling from the flow takes on the order of milliseconds, can be accelerated using GPUs and scale favourably in the number of samples being produced. Furthermore, the training procedure is performed once and hence represents an offline, one-off cost exchanged for higher efficiency deployment. 
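Since the objective above reduces to maximum-likelihood density estimation on accepted (state, perturbation) pairs, the training loop itself is short. In the sketch below, the flow argument is a placeholder for any conditional density estimator exposing a log_prob(z, context) method (for instance a conditional masked autoregressive flow); this is our own illustration, not code from the paper.

```python
import torch

def train_proposal(flow, accepted_states, accepted_perturbations,
                   n_steps=10_000, batch_size=512, lr=1e-3):
    """Fit q_phi(z_t | x_{t-1}) by maximum likelihood on accepted pairs."""
    opt = torch.optim.Adam(flow.parameters(), lr=lr)
    n = accepted_states.shape[0]
    for _ in range(n_steps):
        idx = torch.randint(0, n, (batch_size,))
        x_prev = accepted_states[idx]              # conditioning state x_{t-1}
        z = accepted_perturbations[idx]            # accepted perturbation z_t
        loss = -flow.log_prob(z, x_prev).mean()    # minimize -E[log q_phi(z_t | x_{t-1})]
        opt.zero_grad()
        loss.backward()
        opt.step()
    return flow
```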
The training data can also be generated using large-scale distributed computing (as the mini-batching process is inherently embarrassingly parallelizable) that may not be available or practical for use at deployment time. Implementation We use a conditional masked autoregressive flow as the structure of q φ, with 5 single-layer MADE blocks , 256 hidden units per layer and batch normalization layers at the input to each intermediate MADE block. The dimensionality of the flow is the number of states perturbed in the original model. To introduce conditioning we use the current state vector, x t−1, as input to a hypernetwork that outputs the parameters for each layer in the flow. The networks are implemented in PyTorch , and optimized using ADAM . We demonstrate on two examples here, and an additional two experiments in the appendix. In these experiments we first aim to demonstrate that learning the required conditional autoregressive flow is tractable and faithfully represents the conditional distribution over accepted perturbations. We then use the learned proposal in a particle-based sequential (c) Figure 1: Results for the annulus problem introduced in Section 3.1. 1a indicates the permissible region as a black dashed band, where p and q φ is shown in red and green respectively. The shape of q φ is the same as p inside the band, with little mass outside of the band. This shows the flow has learnedp effectively with a low rejection rate. 1b confirms q φ all-but eliminates rejection. 1c shows the reduction in the variance of the evidence across 100 independent sequential Monte Carlo sweeps of 100 independent datasets. Monte Carlo state-space inference scheme and show that lower-variance inference can be obtained for a fixed sample budget. In this example, the (unknown) true generative model of the observed data is a constant speed circular orbit around the origin in the x-y plane. We perform inference using a misspecified model that only simulates constant velocity forward motion, such that x t ∈ R 4, with Gaussian perturbations to position and velocity. We impose a failure constraint limiting the change in the distance of the point from the origin to a fixed threshold. This condition mirrors the notion that states in brittle simulators have large allowable covariances in particular directions, but very narrow permissible perturbations in other directions. Figure 1a and Figure 1b shows q φ has effectively learnedp(z t |x t−1), reducing rejection rate under q φ to less than 4% compared to approximately 75% under p. We then use the learned q φ as the proposal in a particle filter , an approximate inference algorithm often applied to posterior inference in time-series models. We use a fixed sample budget and hence failed samples are discarded, without retrying a new sample from the proposal. The in Figure 1c show that we are able to recover lower variance evidence approximations using q φ compared to p, achieving a paired t-test score of < 0.0001. This experiment confirms we are able to learn a proposal that incurs lower rejection, and that reducing the rejection rate increases fidelity of inference (for a fixed computational budget). We now apply our method to the robotics simulator MuJoCo , using the built-in example "tosser." MuJoCo allows some overlap between solid objects to simulate the contact dynamics. This is an example of model misspecification borne out of the requirements of reasonably writing a simulator. We therefore place a hard limit on the amount objects are allowed to overlap. 
We add Gaussian perturbations to the state. Results of the "tosser" experiment introduced in Section 3.2. 2a shows a typical state evolution. 2b shows the conditional autoregressive flow we learn markedly reduces the number of rejections. 2c shows the of performing sequential Monte Carlo using p and q φ. 2d shows the of performing hypothesis testing, where the correct hypothesis not selected using p, but is using q φ. Figure 2 shows the of this experiment. Collisions are generally rare events and hence the rejection rate of p is just 10%. Figure 2b shows that the autoregressive flow learns a proposal with a significantly lower rejection rate, reaching 3% rejection. However these rejections are concentrated in the critical regions of state-space and so this reduction yields an large reduction in the variance of the evidence approximation, as shown in Figure 2c. We conclude by applying our method to hypothesis testing, selecting the mass of the capsule. Shown in Figure 2d, using p in higher variance evidence approximations than when q φ is used, causing p to select the wrong model, with a reasonable level of significance (p = 0.125), while using q φ selects the correct hypothesis with p = 0.0127. In this paper we have tackled reducing simulator failures caused by naively perturbing the input state. We achieve this by defining these simulators as rejection samplers and learning a conditional autoregressive flow to estimate the state-dependent proposal distribution conditioned on acceptance. We show that using this learned proposal reduces the variance of inference when used as the proposal in a subsequent approximate inference scheme. This work has readily transferable practical contributions in the scientific community where naively modified simulation platforms are widely deployed. Journal of basic Engineering, 82 Appendix A. Background Deterministic simulators are often stochastically perturbed to increase the diversity of the achievable simulations and to fit data more effectively. White noise perturbation to time series systems is common, such as the widely used ARMA models . The most straightfoward example of this however is the widely used Kalman filter . The Kalman filter, at its core, is determinstic transition model which is then perturbed with additive Gaussian noise. The form of the process and noise kernels are chosen such the system has a closed form representation. Without the additive process noise, the Kalman filter is deterministic and would be unable to represent the variability in the real-world. More complex systems cannot be analyzed in closed form like the Kalman filter. Accordingly deterministic simulators of the dynamics with stochastic perturbations and numerical methods are used in practice. Specific examples of such systems that are: stochastic Hodgkin Huxley models of neural dynamics (; ; ;), computational finance analysis of asset prices (; ;), predator-prey dynamics , epidemiology and mobile robotics . As simulators become more complex, guaranteeing robustness is more difficult, and individual function evaluations are more expensive. and establish the sensitivity of earth science models to static input parameter values by building a discriminative classifer for parameters that induce failure. take an alternative approach instead treating simulator failure as an imputation problem, fitting a function regressor to predict the outcome of the failed experiment given the neighbouring experiments that successfully terminated. 
However these methods are limited by the lack of clear probabilistic interpretation in terms of the originally specified joint distribution in time series models and their ability to scale to high dimensions. Autoregressive flows (AFs) are a flexible class of density estimators. AFs define a density, q φ (x), trainable using stochastic gradient descent to approximate the target distribution p(x), by minimizing the KL-divergence between the target distribution and the approximation: AFs operate by transforming samples from a "base distribution" through a series of learned warpings, interpretable as a change of variables, into samples distributed according to the target distribution. The flow layers are designed such that the required Jacobians and inverses are cheaply computable. A popular flow structure is the masked autoencoder for distribution estimation , or MADE, that facilitates GPU-based parallelization. Multiple MADE blocks are used in masked autoregressive flows (MAF) overcoming the ordering dependency of autoregressive flows. AFs are also capable of learning conditional distributions by making the parameters of the flow dependent on the data using hypernetworks . We include here a more complete derivation of the presented in the main text. The overarching aim of this work is to develop a flexible proposal over perturbations that places minimal mass on perturbations that cause the simulator to not return a value, while not changing the originally specified model. Doing so reduces the wasted computational cost incurred by simulations failing, and also increases the effective sample size for a given sample budget. We consider deterministic models, expressed as simulators, describing the time-evolution of a state x t ∈ X, where we denote application of the simulator iterating the state as x t ← f (x t−1). However, brittle simulators fail for "invalid" inputs, which we denote as a return value of ⊥ (read as "bottom") from the simulator. Hence the complete definition of f is f: X → {X, ⊥}. We denote the region of valid inputs as X A ⊂ X, and the region of invalid inputs as X R ⊂ X, such that X A ∪ X R = X, where the boundary between these regions is unknown. Over the whole support, f defines a many-to-one function, as all X R maps to ⊥. However, the algorithm we go on to derive only requires that f is one-to-one in the accepted region. This is not uncommon in real simulators, and is satisfied by, for example, ODE models. A stochastic, additive perturbation to state, denoted z t ∈ X, is applied to induce a distribution over states. The distribution of this perturbation is denoted p(z t |x t−1), although, in practice, this distribution is often state independent. The iterated state is therefore calculated as x t ← f (x t−1 + z t). We define the random variable A t ∈ {0, 1} to denote whether the perturbation (as x t−1 is being conditioned on) is accepted. The naive approach to sampling from the perturbed system, shown in Algorithm 1, is to repeatedly sample from the proposal distribution and evaluate f until the simulator successfully exists. This procedure defines A t = I [f (x t−1 + z t) = ⊥], z t ∼ p(z t |x t−1), i.e. successfully iterated samples are accepted with certainty. This approach incurs significant wasted computation as the simulator must be called repeatedly, with failed iterations being discarded. Therefore the objective of this work is to derive a more efficient sampling mechanism. 
We begin by showing that Algorithm 1 defines a rejection sampler, with a specific form, targeting the space of successfully iterated states. This reasoning is illustrated in Figure 3. The behavior of f and the distribution p(z t |·) implicitly define a distribution over successfully iterated states. We denote this "target" distribution as p(x t |x t−1) = p(x t |x t−1, A t = 1), where the bar indicates that the sample was accepted, and hence places no probability mass on failures. Note there is no bar on p(z t |x t−1) above, indicating that it is defined Algorithm 1 Sampling from a brittle simulator. Data: Current state x t−1, brittle simulator f, perturbation proposal p(z t |x t−1). Result: Iterated state x t and perturbation z t. Figure 3: Graphical representation of how a brittle deterministic simulator acts as a rejection sampler, targetingp(z t |x t−1). For clarity here we assume x t=1 = 0 and z t is independent of x t. The simulator, f (z t), returns ⊥ for some unknown input regions, shown in green. The proposal over z t is shown in blue. The target distribution,p(z t), shown in orange, is implicitly defined asp(z t) = 1 Mp p(z t)I [f (z t) = ⊥], where M p is the normalizing constant from p, equal to the acceptance rate. Accordingly the proposal distribution, scaled by M p, is exactly equal top(z t) in the accepted region. Therefore sampling from p until f successfully exits, as in Algorithm 1, can be seen as constructing a rejection sampler with proposal p(z t), and acceptance ratio,p with no knowledge of the accept/reject behaviors of f and hence probability mass may be placed on regions that yield failure. The functional form ofp is unavailable, and the density cannot be evaluated for any input value. Importantly,p is the distribution specified a-priori by the modeler, sampled from by the entire simulation pipeline, and hence any algorithm we develop must also targetp(x t |x t−1). The existence of p(x t |x t−1) implies the existence of a second distribution: the distribution over accepted perturbations, denoted p(z t |x t−1). Note that this distribution is also conditioned on acceptance under the chosen simulator indicated be the presence of a bar. We assume f is one-to-one in the accepted regime, and so the change of variables rule can be applied to directly relate this to p(x t |x t−1). Under our initial algorithm for sampling from a brittle simulator we can therefore write the following identity: where the normalizing constant M p is the acceptance rate under p. By inspecting, accepting with certainty perturbations that exit successfully can be seen as proportionally shifting mass from regions of p where the simulator does not exit to regions where it does. In a rejection sampler, the probability of accepting a proposed sample is proportional to the ratio between the target distribution and the proposal, scaled by a constant such that: As we have already stated, we cannot evaluate the target density, but we can establish if the density is non-zero (indicated by the simulator not failing). A sufficient condition to ensure the correctness of a rejection sampler in this scenario is that the proposal density is proportional to the target density wherever the target density has support. Applying this condition to our scenario implies that if the simulator fails, the density under the target distribution is known to be zero and the sample should be rejected with certainty, regardless of the density under the proposal distribution. 
In the accepted region, the sample should be accepted with probability p(z t |·)/M p(z t |·), where M is selected to satisfy. However, from, it can be seen p ∝ p hence proposal and target are proportional irrespective of the choice of M, and the value of M p, satisfying the above criteria. The acceptance rule of the rejection sampler is therefore reduced to I [f (x t−1 + z t) = ⊥]. Importantly, we do not need to evaluate M p, M, orp to use Algorithm 1 as a valid rejection sampler. This simple probabilistic interpretation of the behavior of the simulation process enables us to establish as a definition ofp valid across the entire input domain of f -a definition we now exploit to learn an efficient proposal. We now derive how we can learn the proposal distribution, denoted q φ parameterized by φ, to replace p, such that the acceptance rate under q φ (denoted M q φ) tends towards unity, minimizing wasted computation, while also retaining the same joint distribution as the originally specified model. We denote q φ as the proposal we train, which, coupled with the simulator, implicitly define a proposal over accepted samples, denoted q φ. Expressing this mathematically, we wish to minimize the distance between joint distribution implicitly specified over accepted iterated states using the a-priori specified proposal distribution p, and the joint distribution defined implicitly as q φ: where we select the Kullback-Leibler (KL) divergence as the metric of distance between distributions. The outer expectation defines this objective as amortized across state space. As is standard in amortized and compiled inference methods we can generate the samples by directly sampling from the model . We eventually perform this minimization using stochastic gradient descent, and so this expectation defines the distribution from which we sample the minibatches used, and so we drop this expectation for compactness. Expanding the KL term yields: Noting that q φ and p are defined only on accepted samples, where f is one-to-one, we can apply a change of variables defined for q φ as: and likewise for p. This transforms the distribution over x t into a distribution over z t and a Jacobian term: Noting that the same Jacobian terms appear in the numerator and denominator we are able to cancel these: taking care to also apply the change variables in the distribution we are sampling from in. We can now discard the remaining p term as it is independent of φ, and noting that f −1 (x t) = x t−1 + z t we can write: It can now be read off that minimizing the KL stated in can be performed by setting q φ (z t |x t−1) equal to p(z t |x t−1). Had we have discarded p a step earlier, we would have been unable to eliminate the Jacobian terms inside the logarithm. However, this distribution is defined after rejection sampling, and can only be defined as in: denoting M q φ as the acceptance rate under q φ. Note again that q φ is not dependent on the accept/reject characteristics of f. Differentiation of M q φ is intractable. Further, there is an infinite family of q φ proposals that yield p = q φ, that have non-zero rejection rates. However, we observe that φ * is a maximizer for both q φ andq φ in the limit of zero rejection, and so we can instead optimize q φ (z t |x t−1): with no consideration of the rejection behavior. Additionally, by removing the rejection sampler as we have, we have implicitly specified that the proposal distribution must have an acceptance rate of one. 
This term is differentiable with respect to φ and so we can maximize this quantity using stochastic gradients, where samples from the outer expectation over x t−1 defines the distribution from which we sample minibatches from. This expression shows that we can learn the distribution over accepted x t values by learning the distribution over z t, without needing to calculate the Jacobian or inverse of the transformation defined by f. We can now perform density estimation on the accepted samples from the a-priori specified rejection sampler to learn a proposal for accepted x t samples, thus minimizing wasted computation, targeting the same overall joint distribution, and retaining interpretability by utilizing the simulator. In this section we include additional figures and experimental details for the annulus and MuJoCo experiment presented in the main text, along with an additional two experiments. Our first additional example uses a simulator of balls elastically bouncing in a square enclosure, as shown in Figure 4a. The dimensionality of the state vector, x t, is four times the number of balls -the x-y coordinate and velocity of the centre of mass, per ball. We add a small amount of Gaussian noise at each iteration to the position and velocity of each ball. This perturbation induces the possibility that two balls overlap, or, a ball intersects with the wall. Both of these represent invalid physical arrangements and so the simulator returns ⊥ for such configurations. We note that here, we are conditioning on the state of both balls simultaneously, and proposing the perturbation to the state jointly. Figure 4 displays the of this experiment. Figure 5 shows the distribution over x-y perturbations of a single ball, conditioned on the other ball being static and stationary. Green contours show the perturbations learned by autoregressive flow such that failure is not induced. In Figure 4c we plot the rejection rate under p and q φ as a function of the position of the first ball, with the second ball fixed in the position shown, showing that rejection has been all but eliminated. We again see a reduction in the variance of the evidence approximation when comparing p and q in an SMC scheme, as shown in Figure 4d. This example demonstrates the applicability of the autoregressive flow; but also demonstrates how a seemingly simple simulator becomes brittle when naively perturbed. shows the rejection rate as a function of the position of the first ball, with the second ball in the position shown. The trained proposal (right) has all but eliminated rejection in the permissible space compared to the a-priori specified proposal (left). The rejection rate under p is much higher in the interior as the second ball may also leave the enclosure, whereas q φ has practically eliminated rejection by jointly proposing perturbations. 4d shows the reduction in variance achieved by using q φ. Although the reduction appears more modest compared to, say, the annulus example, it still achieves a paired t-test score of < 0.0001, indicating a strong level of statistical significance. We conclude by applying our algorithm to a simulator for the widely studied Caenorhabditis elegans roundworm. WormSim, presented by , is a simulator of the body of the worm, driven by a surrogate for the true neural architecture of Caenorhabditis elegans, and uses a 510 dimensional state representation. 
We apply perturbations to the 98 dimensional subspace defining the 49 x-y coordinate pairs giving the physical position of the worm, while conditioning on the full 510 dimensional state vector. The expected rate of failure increases sharply as a function of the scale of the perturbation applied, as shown in Figure 6a, as the integrator used in WormSim is unable to integrate highly perturbed states. We then train an autoregressive flow targeting p̄, where the rejection rate during training is shown in Figure 6b. We see that we are able to learn an autoregressive flow with lower rejection rates, reaching approximately 53% rejection, compared with approximately 75% rejection under p. Although the rejection rate is higher than ultimately desired, we include this example to show how rejections occur in real simulators through integrator failure. We believe larger flows with regularized parameters can reduce the rejection rate further. Figure 5: Shown is the a-priori specified proposal distribution, p, over the perturbation to the position of the first ball specified in the model, and the learned proposal, q φ, in green, for the bouncing balls experiment introduced in Section C.1. The edge of the permissible region of the enclosure is shown as a black dashed line. The second ball is held fixed, and the resulting invalid region is shaded. The flow has learned to deflect away from the disallowed regions. x t ← x t−1 + δt ×ẋ t−1 + z x,t, z x,t ∼ N (0, 0.1), y t ← y t−1 + δt ×ẏ t−1 + z y,t, z y,t ∼ N (0, 0.1), ẋ t ← ẋ t−1 + zẋ ,t, zẋ ,t ∼ N (0, 0.1), ẏ t ← ẏ t−1 + zẏ ,t, zẏ ,t ∼ N (0, 0.1). Failure is defined as the change in radius being greater than 0.03. To compute the variances of the SMC sweep we generate 100 random traces. We then perform 50 SMC sweeps per trace, using 100 particles, and compute the evidence. We use the configuration "tosser" included with MuJoCo, only modifying it by removing the second unused bucket. We otherwise use the completely standard simulation configuration. We introduce the limit on overlap leveraging MuJoCo's built-in collision detection, rejecting overlaps above 0.005. Typical overlaps in the standard execution of MuJoCo are below this limit. An integration time step of 0.002 is used. We observe only the x-y position of the capsule with Gaussian distributed noise, with standard deviation 0.1. We perturb the x-y position and velocity of the capsule with Gaussian distributed noise, with standard deviations 0.005 and 0.1 respectively. We perturb the angle and angular velocity of the capsule with Gaussian distributed noise, with standard deviations 0.05 and 0.05 respectively. These values were chosen to be in line with typical simulated values in the tosser example. We place a prior over the initial position and velocity with standard deviation 0.01 for positions and 0.1 for velocities, and mean equal to their true values. In this experiment, the state input to the normalizing flow is the position and angle, and derivatives, of the capsule, as well as the state of the actuator. The actuator's state is unobserved and is not perturbed under the model. We also input time into the normalizing flow as the control dynamics are not constant with time. To compute the variances of the SMC sweep we generate 50 random traces. We then perform 20 SMC sweeps per trace, using 100 particles, and compute the evidence.
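For reference, the evidence estimates used throughout these experiments come from standard SMC sweeps; a generic bootstrap particle filter returning the log-evidence estimate is sketched below (our own minimal implementation, not the experiment code).

```python
import numpy as np

def particle_filter_log_evidence(transition, log_likelihood, prior_sample,
                                 observations, n_particles=100):
    """Bootstrap particle filter; returns the log of the evidence estimate.

    transition(particles) propagates all particles one step, log_likelihood(y, particles)
    scores them against observation y, and prior_sample(n) draws the initial particles.
    """
    particles = prior_sample(n_particles)
    log_Z = 0.0
    for y in observations:
        particles = transition(particles)
        log_w = log_likelihood(y, particles)
        m = log_w.max()
        log_Z += m + np.log(np.mean(np.exp(log_w - m)))   # log-mean-exp of weights
        w = np.exp(log_w - m)
        w /= w.sum()
        idx = np.random.choice(n_particles, size=n_particles, p=w)  # multinomial resampling
        particles = particles[idx]
    return log_Z
```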
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJecKyhEKr
We learn a conditional autoregressive flow to propose perturbations that don't induce simulator failure, improving inference performance.
Multi-hop question answering requires models to gather information from different parts of a text to answer a question. Most current approaches learn to address this task in an end-to-end way with neural networks, without maintaining an explicit representation of the reasoning process. We propose a method to extract a discrete reasoning chain over the text, which consists of a series of sentences leading to the answer. We then feed the extracted chains to a BERT-based QA model to do final answer prediction. Critically, we do not rely on gold annotated chains or ``supporting facts:'' at training time, we derive pseudogold reasoning chains using heuristics based on named entity recognition and coreference resolution. Nor do we rely on these annotations at test time, as our model learns to extract chains from raw text alone. We test our approach on two recently proposed large multi-hop question answering datasets: WikiHop and HotpotQA, and achieve state-of-art performance on WikiHop and strong performance on HotpotQA. Our analysis shows the properties of chains that are crucial for high performance: in particular, modeling extraction sequentially is important, as is dealing with each candidate sentence in a context-aware way. Furthermore, human evaluation shows that our extracted chains allow humans to give answers with high confidence, indicating that these are a strong intermediate abstraction for this task.
[ 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
ByxDJyHYPS
We improve answering of questions that require multi-hop reasoning by extracting an intermediate chain of sentences.
Normalizing constant (also called partition function, Bayesian evidence, or marginal likelihood) is one of the central goals of Bayesian inference, yet most of the existing methods are both expensive and inaccurate. Here we develop a new approach, starting from posterior samples obtained with a standard Markov Chain Monte Carlo (MCMC). We apply a novel Normalizing Flow (NF) approach to obtain an analytic density estimator from these samples, followed by Optimal Bridge Sampling (OBS) to obtain the normalizing constant. We compare our method which we call Gaussianized Bridge Sampling (GBS) to existing methods such as Nested Sampling (NS) and Annealed Importance Sampling (AIS) on several examples, showing our method is both significantly faster and substantially more accurate than these methods, and comes with a reliable error estimation. Normalizing constant, also called partition function, Bayesian evidence, or marginal likelihood, is the central object of Bayesian methodology. Despite its importance, existing methods are both inaccurate and slow, and may require specialized tuning. One such method is Annealed Importance Sampling (AIS), and its alternative, Reverse AIS (RAIS), which can give stochastic lower and upper bounds to the normalizing constant, bracketing the true value . However, as the tempered distribution may vary substantially with temperature, it can be expensive to obtain good samples at each temperature, which can lead to poor estimates . Nested sampling (NS) is another popular alternative , which can be significantly more expensive than standard sampling methods in higher dimensions but, as we show, can also lead to very inaccurate estimates. Moreover, there is no simple way to know how accurate the estimate is. Here we develop a new approach to the problem, combining Normalizing Flow (NF) density estimators with Optimal Bridge Sampling (OBS). In a typical Bayesian inference application, we first obtain posterior samples using one of the standard Markov Chain Monte Carlo (MCMC) methods. In our approach we use these samples to derive the normalizing constant with relatively few additional likelihood evaluations required, making the additional cost of normalizing constant estimation small compared to posterior sampling. All of our calculations are run on standard CPU platforms, and will be available in the BayesFast Python package. Let p(x) and q(x) be two possibly unnormalized distributions defined on Ω, with normalizing constants Z p and Z q. For any function α(x) on Ω, we have if the integral exists. Suppose that we have samples from both p(x) and q(x), and we know Z q, then Equation gives which is the Bridge Sampling estimation of normalizing constant . It can be shown that many normalizing constant estimators, including Importance Sampling and Harmonic Mean, are special cases with different choices of bridge function α(x) . For a given proposal function q(x), an asymptotically optimal bridge function can be constructed, such that the ratio r = Z p /Z q is given by the root of the following score function equation where n p and n q are the numbers of samples from p(x) and q(x). For r ≥ 0, S(r) is monotonic and has a unique root, so one can easily solve it with e.g. secant method. This estimator is optimal, in the sense that its relative mean-square error is minimized . Choosing a suitable proposal q(x) for Bridge Sampling can be challenging, as it requires a large overlap between q(x) and p(x). 
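To make the optimal bridge sampling step concrete, below is a minimal NumPy sketch of the classical iterative scheme for the ratio r = Z_p/Z_q, using unnormalized log densities evaluated at samples from p and q. The fixed-point iteration (used here instead of the secant method mentioned in the text), the function name, and the initialization are illustrative assumptions, not the authors' BayesFast code.

```python
import numpy as np

def optimal_bridge_log_ratio(log_p_at_p, log_q_at_p, log_p_at_q, log_q_at_q, n_iter=200):
    """Estimate ln(Z_p / Z_q) with the classical iterative optimal bridge sampling scheme.

    log_*_at_p are unnormalized log densities evaluated at samples drawn from p;
    log_*_at_q are evaluated at samples drawn from the proposal q (assumed normalized).
    """
    n_p, n_q = len(log_p_at_p), len(log_p_at_q)
    s_p, s_q = n_p / (n_p + n_q), n_q / (n_p + n_q)
    # Density ratios l(x) = p(x) / q(x); a robust implementation would stay in log space.
    l_at_p = np.exp(log_p_at_p - log_q_at_p)
    l_at_q = np.exp(log_p_at_q - log_q_at_q)
    r = np.exp(np.median(np.log(l_at_q)))      # crude initial guess for Z_p / Z_q
    for _ in range(n_iter):
        num = np.mean(l_at_q / (s_p * l_at_q + s_q * r))   # average over q samples
        den = np.mean(1.0 / (s_p * l_at_p + s_q * r))      # average over p samples
        r = num / den
    return np.log(r)   # equals ln Z_p when q is normalized (Z_q = 1)
```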
One approach is Warp Bridge Sampling (WBS), which transforms p(x) to a Gaussian with linear shifting, rescaling and symmetrizing. As we will show, this approach can be inaccurate or even fail completely for more complicated probability densities.

As stated above, an appropriate proposal q(x) which has large overlap with p(x) is required for OBS to give accurate results. In a typical MCMC analysis we have samples from the posterior, so one can obtain an approximate density estimate q(x) from these samples using a bijective NF. In this approach one maps p(x) to an unstructured distribution such as the zero-mean, unit-variance Gaussian N(0, I). For density evaluation we must also keep track of the Jacobian of the transformation, |dΨ/dx|, so that our estimated distribution is q(x) = N(0, I)|dΨ/dx|, where Ψ(x) is the transformation. The probability density q(x) is normalized, so we know Z_q = 1. There have been various NF methods proposed recently in the machine learning literature (e.g., Dinh et al., 2016), which however failed on several examples we present below. Moreover, we observed that training with these is very expensive and can easily dominate the overall computational cost. For these reasons we instead develop Iterative Neural Transform (INT), a new NF approach, details of which will be presented elsewhere. It is based on combining optimal transport and information theory, repeatedly finding and transforming the one-dimensional marginals that are the most deviant between the target and proposal (Gaussian) distributions. After computing a dual representation of the Wasserstein-1 distance to find the maximally non-Gaussian directions, we apply a bijective transformation that maximizes the entropy along these directions. For this we use a non-parametric, spline-based transformation that matches the 1-d cumulative distribution function (CDF) of the data to a Gaussian CDF, where kernel density estimation (KDE) is used to smooth the probability density marginals. We found that using a fixed number of 5 to 10 iterations is sufficient for evidence estimation, and the computational cost of our NF density estimation is small when compared to the cost of sampling.

We propose the following Gaussianized Bridge Sampling (GBS) approach, which combines OBS with NF density estimation. In our typical application, we first run the No-U-Turn Sampler (NUTS) to obtain 2n_p samples from p(x) if its gradient is available, while affine invariant sampling can be used in the gradient-free case. To avoid underestimation of Z_p, these 2n_p samples are divided into two batches, and we fit INT with the first batch of n_p samples to obtain the proposal q(x). Then we draw n_q samples from q(x) and evaluate their corresponding p(x), where n_q is determined by an adaptive rule (see Appendix B.4). We solve for the normalizing constant ratio r with the score function equation above, using these n_q samples from q(x) and the second batch of n_p samples from p(x) (also evaluating their corresponding q(x)), and report the result in the form of ln Z_p, with its error approximated by the relative mean-square error of Z_p given in the appendix.

We used four test problems to compare the performance of various estimators. See Appendix A and B for more details of the examples and algorithms. The 16-d Funnel example is adapted from prior work. The funnel structure is common in Bayesian hierarchical models, and in practice it is recommended to reparameterize the model to overcome the pathology. Here we stick to the original parameterization for testing purposes.
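As a rough illustration of the one-dimensional building block of INT described above, the sketch below Gaussianizes a single marginal by pushing an interpolated empirical CDF through the inverse Gaussian CDF. The actual method uses KDE-smoothed, spline-based CDF matching along Wasserstein-selected directions, so this is only a simplified stand-in.

```python
import numpy as np
from scipy.stats import norm
from scipy.interpolate import interp1d

def gaussianize_1d(samples, eps=1e-6):
    """Return a monotone map x -> z that pushes a 1-d marginal towards N(0, 1).

    Uses an interpolated empirical CDF; INT instead uses KDE-smoothed spline CDFs.
    """
    xs = np.sort(samples)
    cdf = (np.arange(1, len(xs) + 1) - 0.5) / len(xs)      # empirical CDF values
    cdf = np.clip(cdf, eps, 1.0 - eps)                      # avoid infinite probit values
    cdf_interp = interp1d(xs, cdf, bounds_error=False, fill_value=(eps, 1.0 - eps))
    return lambda x: norm.ppf(cdf_interp(x))

# Example: a skewed marginal becomes approximately standard normal.
rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=1.0, size=5000)
to_gauss = gaussianize_1d(data)
z = to_gauss(data)
print(z.mean(), z.std())   # close to 0 and 1
```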
The 32-d Banana example comes from a popular variant of multidimensional Rosenbrock function , which is composed of 16 uncorrelated 2-d bananas. In addition, we apply a random 32-d rotation to the bananas, which makes all the parameters correlated with each other. The 48-d Cauchy example is adapted from the LogGamma example in;. In contrast to the original example, where the mixture structure only exists in the first two dimensions, we place a mixture of two heavy-tailed Cauchy distributions along every dimension. The 64-d Ring example has strong non-linear correlation between the parameters, as the marginal distribution of every two successive parameters is ring-shaped. See Figure 1 for a comparison of the estimators. For all of the four test examples, the proposed GBS algorithm gives the most accurate and a valid error estimation. We use NS as implemented in dynesty with its default settings. For all other cases, we use NUTS as the MCMC transition operator. We chose to run (R)AIS with equal number of evaluations as our GBS, but as seen from Figure 1 this number is inadequate for (R)AIS, which needs about 10-100 times more evaluations to achieve sufficient accuracy (see Appendix B.3). In contrast, if we run GBS with 4 times fewer evaluations (Gaussianized Bridge Sampling Lite, GBSL), we achieve an unbiased with a larger error than GBS, but still smaller than other estimators. For comparison we also show replacing OBS with IS (GIS) or HM (GHM), while still using INT for q(x). Although GIS and GHM are better than NS or (R)AIS, they are worse than GBS(L), highlighting the importance of OBS. Finally, we also compare to WBS, which uses a very simple proposal distribution q(x), and fails on several examples, highlighting the importance of using a more expressive NF for q(x). For our GBS(L), most of evaluation time is used to get the posterior samples with standard MCMC, which is a typical Bayesian inference goal, and the additional cost to evaluate evidence is small compared to the MCMC (see Appendix B.4). In contrast, Thermodynamic Integration (TI) or (R)AIS is more expensive than posterior sampling, since the chains need to be accurate at every intermediate state . The same comment applies to NS, which is more expensive than the MCMC approaches we use here for posterior analysis, especially when non-informative prior is used. We present a new method to estimate the normalizing constant (Bayesian evidence) in the context of Bayesian analysis. Our starting point are the samples from the posterior using standard MCMC based methods, and we assume that these have converged to the correct probability distribution. In our approach we combine OBS with INT, a novel NF based density estimator, showing on several high dimensional examples that our method outperforms other approaches in terms of accuracy and computational cost, and provides a reliable error estimate. The model likelihood is with flat prior x 1 ∼ U(−4, 4), x 2:n ∼ U(−30, 30). We use ln Z p = −63.4988 as the fiducial value, and the corner plot of the first four dimensions is shown in Figure 2. The model likelihood is with flat prior U(−15, 15) on all the parameters. The rotation matrix A is generated from a random sample of SO(n), and the same A is used for all the simulations. We use ln Z p = −127.364 as the fiducial value, and the corner plot of the first four dimensions, without or with the random rotation, is shown in Figure 3. 
The strong degeneracy can no longer be identified in the plot once we apply the rotation; however, it still exists and hinders most estimators from obtaining reasonable results.

For the Cauchy example, we use a flat prior U(−100, 100) on all the parameters. We use ln Z_p = −254.627 as the fiducial value, and the corner plot of the first four dimensions is shown in Figure 4. For the Ring example, the model likelihood uses a = 2, b = 1, n = 64, with a flat prior U(−5, 5) on all the parameters. We use ln Z_p = −114.492 as the fiducial value, and the corner plot of the first four dimensions is shown in Figure 5.

We use dynamic NS implemented in dynesty, which is considered more efficient than static NS. Traditionally, NS does not need the gradient of the likelihood, at the cost of lower sampling efficiency in high dimensions. Since the analytic gradient is available for all four examples, we follow dynesty's default setting, which requires the gradient to perform Hamiltonian Slice Sampling for dimensions d > 20. For dimensions 10 ≤ d ≤ 20, random-walk sampling is used instead. dynesty also provides an error estimate for the evidence; see its documentation for details.

For (R)AIS, we divide the warm-up iterations of NUTS into two equal stages, and the (flat) prior is used as the base density. In the first stage, we set β = 0.5 and adapt the mass matrix and step size of NUTS, which acts as a compromise between the possibly broad prior and narrow posterior. In the second stage, we set β = 0 (β = 1) for AIS (RAIS) to get samples from the prior (posterior). After warm-up, we use a sigmoidal schedule to perform annealing, where σ denotes the logistic sigmoid function and we set δ = 4. We use 1,000 warm-up iterations for all four examples, and adjust the number of states T so that it needs roughly the same number of evaluations as GBS in total. The exact numbers are listed in Table 1. We run 16 chains for each case, and average the reported ln Z_p of the different chains, which gives a stochastic lower (upper) bound for AIS (RAIS) according to Jensen's inequality. The uncertainty is estimated from the scatter of the different chains, and should be understood as the error of the lower (upper) bound of ln Z_p, instead of ln Z_p itself.

Table 1: The number of states T used by (R)AIS.
        Funnel  Banana  Cauchy  Ring
AIS     800     2000    3000    3500
RAIS    700     1500    2500    3000

Using the mass matrix and step size of NUTS adapted at β = 0.5, and the prior as base density, may account for the phenomenon that RAIS failed to give an upper bound in the Banana example: the density is very broad at high temperatures and very narrow at low temperatures, which is difficult for samplers adapted at a single β. One may remedy this issue by using a better base density that is closer to the posterior, but this would require delicate hand-tuning and is beyond the scope of this paper. While the upper (lower) bounds of (R)AIS are valid in the limit of a very large number of samples, achieving this limit may be extremely costly in practice.

The remaining normalizing constant estimators require a sufficient number of samples from p(x), which we obtain with NUTS. For WBS, GBS, GIS and GHM, we run 8 chains with 2,500 iterations for the Funnel and Banana examples, and 5,000 iterations for the Cauchy and Ring examples, including the first 20% warm-up iterations, which are removed from the samples.
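The exact sigmoidal annealing schedule is not reproduced in the text above; the sketch below implements one common normalized-sigmoid schedule consistent with that description (a logistic sigmoid with δ = 4, rescaled to run from β = 0 to β = 1). It is an assumption, not necessarily the authors' exact formula.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoidal_betas(T, delta=4.0):
    """Annealing schedule beta_0 = 0, ..., beta_{T-1} = 1 built from a logistic sigmoid."""
    t = np.linspace(-delta, delta, T)
    s = sigmoid(t)
    return (s - s[0]) / (s[-1] - s[0])   # rescale so the schedule spans [0, 1] exactly

betas = sigmoidal_betas(T=800)   # e.g. the number of AIS states used for the Funnel example
print(betas[0], betas[-1])       # 0.0 and 1.0
```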
Then we fit INT using 10 iterations for GBS, GIS and GHM, whose computation cost (a few seconds for the Funnel example) is small or negligible relative to NUTS sampling, and does not depend on the cost of ln p(x) evaluations. For GBSL, the number of NUTS chains, NUTS iterations and INT iterations are all reduced by half, leading to a factor of four decrease in the total computation cost. The relative mean-square error of OBS is minimized and given by where np+nq. Here p (x) = p(x)/Z p and q(x) should be normalized densities. We assume the samples from q(x) are independent, whereas the samples from p(x) may be autocorrelated, and τ f 2 is the integrated autocorrelation time of f 2 (x p) (Frühwirth-), which is estimated by the autocorr module in emcee . Analogous expressions can be derived for IS and HM, The claimed uncertainty in Figure 1 is obtained by assuming that the error is Gaussian distributed. There can be different strategies to allocate samples for BS. In the literature, it is recommended that one draws samples from p(x) and q(x) based on equal-sample-size or equal-time allocation . Since NUTS based sampling usually requires at least hundreds of evaluations to obtain one effective sample from p(x) in high dimensions , which is orders of magnitude more expensive than our NF based sampling for q(x), it could be advantageous to set n q > n p. Throughout this paper, the following adaptive strategy is adopted to determine n q for GBS(L). After obtaining 2n p samples from p(x), we divide them into two equal batches, which will be used for fitting the proposal q(x) and evaluating the evidence, respectively. As an starting point, we draw n q,0 = n p samples from q(x) and estimate the error of OBS using Equation. Note that the right side of Equation is composed of two terms, and only the first term will decrease as one increases n q but fixes n p. Assuming that current samples provide an accurate estimate of the variance and expectation terms in Equation, one can solve for n q such that f err, the fraction of q(x) contributions in Equation, is equal to some specified value, which we set to 0.1. Since the n q,0 samples from q(x) can be reused, if n q < n q,0, no additional samples are required and we set n q = n q,0. On the other hand, we also require that f eva, the fraction of p(x) evaluations that are used for the q(x) samples, is no larger than 0.1, although this constraint is usually not activated in practice. We use 0.1 as the default values of f err and f eva, so that the additional cost of evidence evaluation is small relative to the cost of sampling, while using a larger n q alone can no longer significantly improve the accuracy of normalizing constant. However, if one wants to put more emphasis on posterior sampling (evidence estimation), a larger (smaller) f err and/or smaller (larger) f eva can be used. In principle, it is also possible to use different number of p(x) samples to fit the proposal and evaluate the evidence, in contrast to equal split used in , which we leave for feature research. For GIS and WBS, we use the same n q as solved for GBS(L). No samples from p(x) are required to estimate normalizing constant for GIS, so in this case all the 2n p samples will be used to fit INT. While for GHM, no samples from q(x) are required. Note that for WBS, the additional p(x) evaluations required for evidence estimation is n p + 2n q instead of n q, which comes from the symmetrization of ln p(x).
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SkxKFJ2NtS
We develop a new method for normalization constant (Bayesian evidence) estimation using Optimal Bridge Sampling and a novel Normalizing Flow, which is shown to outperform existing methods in terms of accuracy and computational time.
We present a large-scale empirical study of catastrophic forgetting (CF) in modern Deep Neural Network (DNN) models that perform sequential (or: incremental) learning. A new experimental protocol is proposed that takes into account typical constraints encountered in application scenarios. As the investigation is empirical, we evaluate CF behavior on the hitherto largest number of visual classification datasets, from each of which we construct a representative number of Sequential Learning Tasks (SLTs) in close alignment to previous works on CF. Our clearly indicate that there is no model that avoids CF for all investigated datasets and SLTs under application conditions. We conclude with a discussion of potential solutions and workarounds to CF, notably for the EWC and IMM models. This article is in the context of sequential or incremental learning in Deep Neural Networks (DNNs). Essentially, this means that a DNN is not trained once, on a single task D, but successively on two or more sub-tasks D 1,..., D n, one after another. Learning tasks of this type, which we term Sequential Learning Tasks (SLTs) (see FIG0), are potentially very common in real-world applications. They occur wherever DNNs need to update their capabilities on-site and over time: gesture recognition, network traffic analysis, or face and object recognition in mobile robots. In such scenarios, neural networks have long been known to suffer from a problem termed "catastrophic forgetting"(CF) (e.g., BID7) which denotes the abrupt and near-complete loss of knowledge from previous subtasks D 1,..., D k−1 after only a few training iterations on the current sub-task D k (see FIG0 compared to FIG0). We focus on SLTs from the visual domain with two sub-tasks each, as DNNs show pronounced CF behavior even when only two sub-tasks are involved. The sequential learning tasks used in this study only have two sub-tasks: D1 and D2. During training (white ) and re-training (gray ), test accuracy is measured on D1 (blue,), D2 (green,) and D1 ∪ D2 (red,). The blue curve allows to determine the presence of CF by simple visual inspection: if there is significant degradation w.r.t. the red curve, then CF has occurred. DISPLAYFORM0 The field of incremental learning is large, e.g., BID20 and BID8. Recent systematic comparisons between different DNN approaches to avoid CF are performed in, e.g., BID23 or. Principal recent approaches to avoid CF include ensemble methods BID22 BID6, dual-memory systems BID24 BID11 BID21 BID9 and regularization approaches. Whereas BID10 suggest Dropout for alleviating CF, the EWC method BID14 proposes to add a term to the energy function that protects weights that are important for the previous sub-task (s). Importance is determined by approximating the Fisher information matrix of the DNN. A related approach is pursued by the Incremental Moment Matching technique (IMM) (see), where weights from DNNs trained on a current and a past sub-tasks are "merged" using the Fisher information matrix. Other regularization-oriented approaches are proposed in BID2; BID25 and BID13 which focus on enforcing sparsity of neural activities by lateral interactions within a layer. Number of tested datasets In general, most methods referenced here are evaluated only on a few datasets, usually on MNIST BID16 and various derivations thereof (permutation, rotation, class separation). Some studies make limited use of CIFAR10, SVHN, the Amazon sentiment analysis problem, and non-visual problems such as data from Q-learning of Atari games. 
A largescale evaluation on a huge number of qualitatively different datasets is still missing 1. Model selection and prescience Model selection (i.e., selecting DNN topology and hyperparameters) is addressed in some approaches BID10 but on the basis of a "prescient" evaluation where the best model is selected after all tasks have been processed, an approach which is replicated in BID14. This amounts to a knowledge of future sub-tasks which is problematic in applications. Most approaches ignore model selection BID25 BID2 BID13, and thus implicitly violate causality. Storage of data from previous sub-tasks From a technical point of view, DNNs can be retrained without storing training data from previous sub-tasks, which is done in BID10 and BID25. For regularization approaches, however, there are regularization parameters that control the retention of previous knowledge, and thus must be chosen with care. In BID14, this is λ, whereas two such quantities occur in: the "balancing" parameter α and the regularization parameter λ for L2-transfer. The only study where regularization parameters are obtained through cross-validation (which is avoided in other studies) is BID2 (for λ SN I and λ Ω) but this requires to store all previous training data. This review shows that enormous progress has been made, but that there are shortcomings tied to applied scenarios which need to be addressed. We will formalize this in Sec. 1.2 and propose an evaluation strategy that takes these formal constraints into account when testing CF in DNNs. When training a DNN model on SLTs, first of all the model must be able to be retrained at any time by new classes (class-incremental learning). Secondly, it must exhibit retention, or at least graceful decay, of performance on previously trained classes. Some forgetting is probably unavoidable, but it should be gradual and not immediate, i.e., catastrophic. However, if a DNN is operating in, e.g., embedded devices or autonomous robots, additional conditions may be applicable: Low memory footprint Data from past sub-tasks cannot be stored and used for re-training, or else to determine when to stop re-training. Causality Data from future sub-tasks, which are often known in academic studies but not in applications, must not be utilized in any way, especially not for DNN model selection. This point might seem trivial, but a number of studies such as BID14; BID10 and BID25 perform model selection in hindsight, after having processed all sub-tasks. Constant update complexity Re-training complexity (time and memory) must not depend on the number of previous sub-tasks, thus more or less excluding replay-based schemes such as BID24. Clearly, even if update complexity is constant w.r.t. the number of previous sub-tasks, it should not be too high in absolute terms either. The original contributions of our work can be summarized as follows:• We propose a training and evaluation paradigm for incremental learning in DNNs that enforces typical application constraints, see Sec. 1.2. The importance of such an applicationoriented paradigm is underlined by the fact that taking application constraints into account leads to radically different about CF than those obtained by other recent studies on CF (see Sec. 1.1).• We investigate the incremental learning capacity of various DNN approaches (Dropout, LWTA, EWC and IMM) using the largest number of qualitatively different classification datasets so far described. 
We find that all investigated models are afflicted by catastrophic forgetting, or else in violation of application constraints and discuss potential workarounds.• We establish that the "permuted" type of SLTs (e.g., "permuted MNIST") should be used with caution when testing for CF.• We do not propose a method for avoiding CF in this article. This is because avoiding CF requires a consensus on how to actually measure this effect: our novel contribution is a proposal how to do just that. We collect a large number of visual classification datasets, from each of which we construct SLTs according to a common scheme, and compare several recent DNN models using these SLTs. The experimental protocol is such that application constraints, see Sec. 1.2, are enforced. For all tested DNN models (see below), we use a TensorFlow (v1.7) implementation under Python (v3.4 and later). The source code for all processed models, the experiment-generator and evaluation routine can be found on our public available repository 2.FC A normal, fully-connected (FC) feed-forward DNN with a variable number and size of hidden layers, each followed by ReLU, and a softmax readout layer minimizing cross-entropy. CONV A convolutional neural network (CNN) based on the work of BID4. It is optimized to perform well on image classification problems like MNIST. We use a fixed topology: two conv-layers with 32 and 64 filters of size 5 × 5 plus ReLU and 2 × 2 max-pooling, followed by a fc-layer with 1024 neurons and softmax readout layer minimizing a cross-entropy energy function. EWC The Elastic Weight Consolidation (EWC) model presented by BID14. LWTA A fully-connected DNN with a variable number and size of hidden layers, each followed by a Local Winner Takes All (LWTA) transfer function as proposed in BID25. IMM The Incremental Moment Matching model as presented by. We examine the weight-transfer techniques in our experiments, using the provided implementation. D-FC and D-CONV Motivated by BID10 we combine the FC and CONV models with Dropout as an approach to solve the CF problem. Only FC and CONV are eligible for this, as EWC and IMM include dropout by default, and LWTA is incompatible with Dropout. We perform model selection in all our experiments by a combinatorial hyper-parameter optimization, whose limits are imposed by the computational resources available for this study. In particular, we vary the number of hidden layers L ∈ {2, 3} and their size S ∈ {200, 400, 800} (CNNs excluded), the learning rate 1 ∈ {0.01, 0.001} for sub-task D 1, and the re-training learning rate 2 ∈ {0.001, 0.0001, 0.00001} for sub-task D 2. The batch size (batch size) is fixed to 100 for all experiments, and is used for both training and testing. As in other studies, we do not use a fixed number of training iterations, but specify the number of training epochs (i.e., passes through the whole dataset) as E = 10 for each processed dataset (see Sec. 2.2), which allows an approximate comparison of different datasets. The number of training/testing batches per epoch, B, can be calculated from the batch size and the currently used dataset size. The set of all hyper-parameters for a certain model, denoted P, is formed as a Cartesian product from the allowed values of the hyperparameters L, S, 1, 2 and complemented by hyper-parameters that remain fixed (E, batch size) or are particular to a certain model. For all models that use dropout, the dropout rate for the input layer is fixed to 0.2, and to 0.5 for all hidden layers. 
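The combinatorial hyper-parameter grid described above can be generated as a plain Cartesian product; the dictionary keys below are illustrative, not the authors' code.

```python
from itertools import product

layers = [2, 3]                      # number of hidden layers L
layer_sizes = [200, 400, 800]        # hidden layer size S (CNNs excluded)
lr1 = [0.01, 0.001]                  # learning rate for sub-task D1
lr2 = [0.001, 0.0001, 0.00001]       # re-training learning rate for sub-task D2

grid = [dict(L=L, S=S, lr_d1=e1, lr_d2=e2)
        for L, S, e1, e2 in product(layers, layer_sizes, lr1, lr2)]
print(len(grid))   # 2 * 3 * 2 * 3 = 36 combinations, before fixed settings such as E and batch size
```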
For CNNs, the dropout rate is set to 0.5 for both input and hidden layers. All other hyper-parameters for CNNs are fixed, e.g., number and size of layers, the max-pooling and filter sizes and the strides (2 × 2) for each channel. These decisions were made based on the work of BID10. The LWTA block size is fixed to 2, based on the work of BID25. The model parameter λ for EWC is set to λ1/ 2 (set but not described in the source code of BID14). For all models except IMM, the momentum parameter for the optimizer is set to µ = 0.99 BID26. For the IMM models, the SGD optimizer is used, and the regularizer value for the L2-regularization is set to 0.01 for L2-transfer and to 0.0 for weight transfer. We select the following datasets (see Tab. 1). In order to construct SLTs uniformly across datasets, we choose the 10 best-represented classes (or random classes if balanced) if more are present. MNIST FORMULA0 BID16 ) is the common benchmark for computer vision systems and classification problems. It consist of gray scale images of handwritten digits.EMNIST BID5 is an extended version of MNIST with additional classes of handwritten letters. There are different variations of this dataset: we extract the ten best-represented classes from the By Class variation containing 62 classes. Fruits 360 BID18 ) is a dataset comprising fruit color images from different rotation angles spread over 75 classes, from which we extract the ten best-represented ones. Devanagari BID1 contains gray scale images of Devanagari handwritten letters. From the 46 character classes (1.700 images per class) we extract 10 random classes. FashionMNIST BID27 consists of images of clothes in 10 classes and is structured like the MNIST dataset. We use this dataset for our investigations because it is a "more challenging classification task than the simple MNIST digits data BID27 ". SVHN BID19 ) is a 10-class dataset based on photos of house numbers. We use the cropped digit format, where the number is centered in the color image. CIFAR10 BID15 ) contains color images of real-world objects e.g, dogs, airplanes etc. NotMNIST (Bulatov Yaroslav) contains grayscale images of the 10 letter classes from "A" to "J", taken from different publicly available fonts. DISPLAYFORM0 is a modified version of the "Arabic Digits dataBase", containing grayscale images of handwritten digits written by 700 different persons. Tab. 2). For the latter case, we include SLTs where the second sub-task adds only 1 class (D9-1 type SLTs) or 5 classes (D5-5 type SLTs), since CF may be tied to how much newness is introduced. We include permutations (DP10-10) since we suspected that this type of SLT is somehow much easier than others, and therefore not a good incremental learning benchmark. As there are far more ways to create D5-5 type SLTs than D9-1 type SLTs, we create more of the former (8-vs-3) in order to avoid misleading due to a particular choice of subdivision, whereas we create only a single permutation-type SLT. Table 2: Overview of all SLTs. The assignment of classes to sub-tasks D1 and D2 are disjunct, except for DP10-10 where two different seeded random image permutations are applied. 
SLT       D1           D2
D5-5a     0-4          5-9
D5-5b     0 2 4 6 8    1 3 5 7 9
D5-5c     3 4 6 8 9    0 1 2 5 7
D5-5d     0 2 5 6 7    1 3 4 8 9
D5-5e     0 1 3 4 5    2 6 7 8 9
D5-5f     0 3 4 8 9    1 2 5 6 7
D5-5g     0 5 6 7 8    1 2 3 4 9
D5-5h     0 2 3 6 8    1 4 5 7 9
D9-1a     0-8          9
D9-1b     1-9          0
D9-1c     0 2-9        1
DP10-10   0-9          0-9

This study presents just one, albeit very large, experiment, whose experimental protocol implements the constraints from Sec. 1.2. Every DNN model from Sec. 2 is applied to each SLT as defined in Sec. 2.3 while taking into account model selection, see Sec. 2.1. A precise definition of our application-oriented experimental protocol is given in Alg. 1. For a given model m and an SLT (D1 and D2), the first step is to determine the best hyper-parameter vector p* for sub-task D1 only (see lines 1-4), which determines the model m_p* used for re-training. In a second step, m_p* (from line 5) is used for re-training on D2, with a different re-training learning rate, which is varied separately. We introduce two criteria for determining the (learning-rate-dependent) quality of a re-training phase (lines 6-10): "best", defined by the highest test accuracy on D1 ∪ D2, and "last", defined by the test accuracy on D1 ∪ D2 at the end of re-training. Although the "best" criterion violates the application constraints of Sec. 1.2 (it requires D1), we include it for comparison purposes. Finally, the result is computed as the highest quality over the re-training learning rates (line 11). Independently of the second step, another training of m_p* is conducted using D1 ∪ D2, resulting in what we term the baseline accuracy.

The results of the experiment described in Sec. 3 are summarized in Tab. 3, and in Tab. 4 for IMM. They lead us to the following principal conclusions. Permutation-based SLTs should be used with caution: we find that DP10-10, the SLT based on permutation, does not show CF for any model and dataset, which is exemplarily visualized for the FC model in FIG2, a model which fails completely for all other SLTs.
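As a toy illustration of the "best" vs. "last" bookkeeping during re-training (not the paper's protocol: there is no hyper-parameter selection stage here, and a linear model stands in for a DNN), the following scikit-learn sketch builds a D9-1-style SLT from the digits data and tracks accuracy on D1 ∪ D2 while re-training on D2 only.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

# D9-1-style SLT on the digits data: D1 = classes 0-8, D2 = class 9 only.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X / 16.0, y, test_size=0.3, random_state=0)
d1, d2 = y_tr < 9, y_tr == 9

clf = SGDClassifier(learning_rate="constant", eta0=0.01, random_state=0)
classes = np.arange(10)
for _ in range(10):                                  # training phase on D1
    clf.partial_fit(X_tr[d1], y_tr[d1], classes=classes)

accs = []                                            # re-training on D2, evaluated on D1 ∪ D2
for _ in range(10):
    clf.partial_fit(X_tr[d2], y_tr[d2])
    accs.append(clf.score(X_te, y_te))
print("best:", max(accs), "last:", accs[-1])         # the two quality criteria from Alg. 1
```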
Here, the magnitude of the best/last difference is a good indicator of CF which clearly happens in (c), partly in (b) and slightly or not at all in (a).EWC is ineffective against CF for more complex problems. Tab. 3 shows that EWC cannot prevent CF for D5-5 type SLTs, see FIG6. Apparently, the EWC mechanism cannot protect all the weights relevant for D 1 here, which is likely to be connected to the fact that the number of samples in both sub-tasks is similar. This is not the case for D9-1 type tasks where EWC does better and where D 2 has about 10% of the samples in D 1. Best EWC experiments for SLT D5-5d constructed from all datasets, to be read as FIG2. We observe that CF happens for all datasets. See also Appendix B for 2D plots. IMM is effective for all SLTs but unfeasible in practice. As we can see from Tab. 4, wtIMM clearly outperforms all other models compared in Tab. 3. Especially for the D5-5 type SLTs, a modest incremental learning quality is attained, which is however quite far away from the baseline accuracy, even for MNIST-derived SLTs. This is in contrast to the reported in for MNIST: we attribute this discrepancy to the application-oriented model selection procedure using only D 1 that we perform. In contrast, in, a model with 800/800/800 neurons, for which good on MNIST are well-established, is chosen beforehand, thus arguably making implicit use of D 2. A significant problem of IMM is the determination of the balancing parameter α, exemplarily illustrated in FIG7. Our show that the optimal value cannot simply be guessed from the relative sizes of D 1 and D 2, as it is done in, but must be determined by cross-validation, thereby requiring knowledge of D 1 (violates constraints). Apart from these conceptual issues, we find that the repeated calculation of the Fisher matrices is quite time and memory-consuming (>4h and >8GB), to the point that the treatment of SLTs from certain datasets becomes impossible even on high-end machine/GPU combinations when using complex models. This is why we can evaluate IMM only for a few datasets. It is possible that this is an artifact of the TensorFlow implementation, but in the present state IMM nevertheless violates not one but two application constraints from Sec. 1.2. FIG8 and FIG9 give a visual impression of training an IMM model on D9-1 and D5-5 type SLTs, again illustrating basic feasibility, but also the variability of the "tuning curves" we use to determine the optimal balancing parameter α. Best wtIMM experiments for SLT D5-5b constructed from datasets we were able to test. The blue surfaces (epochs 0-10) represent the test accuracy during training on D1, the green surfaces the test accuracy on D2 during training on D2 (epochs 10-20). The white bars in the middle represent baseline accuracy, whereas the right part shows accuracies on D1 ∪ D2 for different α values, computed for mean-IMM (orange surfaces) and mode-IMM (red surfaces). See also Appendix B for 2D plots. The primary from the in Sec. 4 is that CF still represents a major problem when training DNNs. This is particularly true if DNN training happens under application constraints as outlined in Sec. 1.2. Some of these constraints may be relaxed depending on the concrete application: if some prior knowledge about future sub-task exists, it can be used to simplify model selection and improve . If sufficient resources are available, a subset of previously seen data may be kept in memory and thus allow a "best" type evaluation/stopping criterion for re-training, see Alg. 
1.Our evaluation approach is similar to, and we adopt some measures for CF proposed there. A difference is the setting of up to 10 sub-tasks, whereas we consider only two of them since we focus less on the degree but mainly on presence or absence of CF. Although comparable both in the number of tested models and benchmarks, BID23 uses a different evaluation methodology imposing softer constraints than ours, which is strongly focused on application scenarios. This is, to our mind, the reason why those differ significantly from ours and underscores the need for a consensus of how to measure CF.In general application scenarios without prior knowledge or extra resources, however, an essential we draw from Sec. 4 is that model selection must form an integral part of training a DNN on SLTs. Thus, a wrong choice of hyper-parameters based on D 1 can be disastrous for the remaining sub-tasks, which is why application scenarios require DNN variants that do not have extreme dependencies on hyper-parameters such as layer number and layer sizes. Lastly, our findings indicate workarounds that would make EWC or IMM practicable in at least some application scenarios. If model selection is addressed, a small subset of D 1 may be kept in memory for both methods: to determine optimal values of α for IMM and to determine when to stop re-training for EWC. FIG7 shows that small changes to α do not dramatically impact final accuracy for IMM, and FIG4 indicates that accuracy loss as a function of re-training time is gradual in most cases for EWC. The inaccuracies introduced by using only a subset of D 1 would therefore not be very large for both algorithms. To conclude, this study shows that the consideration of applied scenarios significantly changes the procedures to determine CF behavior, as well as the as to its presence in latestgeneration DNN models. We propose and implement such a procedure, and as a consequence claim that CF is still very much of a problem for DNNs. More research, either on generic solutions, or on workarounds for specific situations, needs to be conducted before the CF problem can be said to be solved. A minor but important is that obtained on permutation-type SLTs should be treated with caution in future studies on CF. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research. BID11. This is achieved by dividing the "best" measure from Tab. 3 by the baseline performance. Each table entry contains two numbers: the baseline performance and Ω all, and cell coloring (indicating presence or absence of CF) is performed based on Ω all. The overall picture is similar to the one from Tab. 3, as indicated by the cell coloring. A notable exception is the performance of the CONV and D-CONV models on the SVHN dataset, where Ω all shows an increase, but we do not consider this significant since the already the baseline performance is at chance level here. That is, this problem is too hard for the simple architectures we use, in which case a small fluctuation due to initial conditions will exceed baseline performance. We therefore conclude that Ω all is an important measure whenever baseline performance is better than random, in which case is it not meaningful. On the other hand, our measure works well for random baselines but is less insightful for the opposite case (as the presence of CF is not immediately observable from the raw performances. A combination of both measures might be interesting to cover both cases. 
Here, we present the best results of all algorithms on the MNIST, EMNIST and Devanagari datasets (according to the "best" criterion) for the D9-1b SLT, and the best EWC results on the D5-5d SLT (qualitatively identical to the other D5-5 type SLTs). Such 2D representations of some experimental results, to be read just as FIG4, may give clearer insights into the details of each experiment. Here we can observe CF behavior for all algorithms except EWC and IMM for D9-1b. We can infer that there was no discernible dependency between the occurrence of CF and particular hyperparameter settings (number and size of layers, in particular), since these are already the best experiments for each algorithm and dataset: if these show CF, this means that none of the settings we sampled were able to prevent CF. EWC shows clear CF for the Devanagari dataset, but might conceivably do better on EMNIST given a little more time for learning D2 (this will be investigated). For D5-5d, clear CF occurs even for EWC. IMM does not exhibit CF for D9-1b (at enormous computational cost, though), and we observe that the value of the balancing parameter cannot simply be set to 0.9 and 0.1, respectively, as it has its argmax elsewhere.
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BkloRs0qK7
We check DNN models for catastrophic forgetting using a new evaluation scheme that reflects typical application conditions, with surprising results.
Federated Learning (FL) refers to learning a high-quality global model based on decentralized data storage, without ever copying the raw data. A natural scenario arises with data created on mobile phones by the activity of their users. Given the typical data heterogeneity in such situations, it is natural to ask how the global model can be personalized for every such device, individually. In this work, we point out that the setting of Model Agnostic Meta Learning (MAML), where one optimizes for a fast, gradient-based, few-shot adaptation to a heterogeneous distribution of tasks, has a number of similarities with the objective of personalization for FL. We present FL as a natural source of practical applications for MAML algorithms, and make the following observations. 1) The popular FL algorithm, Federated Averaging, can be interpreted as a meta learning algorithm. 2) Careful fine-tuning can yield a global model with higher accuracy, which is at the same time easier to personalize. However, solely optimizing for the global model accuracy yields a weaker personalization result. 3) A model trained using a standard datacenter optimization method is much harder to personalize, compared to one trained using Federated Averaging, supporting the first claim. These results raise new questions for FL, MAML, and broader ML research.

In recent years, the growth of machine learning applications was driven by aggregation of large amounts of data in a datacenter, where a model can be trained using large-scale distributed systems. Both the research community and the general public are becoming increasingly aware that there is a variety of scenarios where this kind of data collection comes with significant risks, mainly related to notions of privacy and trust. In the presence of user-generated data, such as activity on mobile phones, Federated Learning (FL) proposes an alternative approach for training a high-quality global model without ever sending raw data to the cloud. The FL system proposed by Google selects a sample of available devices and sends them a model to be trained. The devices compute an update to the model based on an optimization procedure with locally available data, and the central system aggregates the updates from different devices. Such an iteration is repeated many times until the model has converged. The users' training data does not leave their devices. The basic FL algorithm, Federated Averaging (FedAvg), has been used in production applications, for instance for next-word prediction in a mobile keyboard, which shows that Federated Learning can outperform the best model trained in a datacenter. Successful algorithmic extensions to the central idea include training a differentially private model, compression (Konečný et al., 2016b; a), secure aggregation, and a smaller number of always-participating nodes. FL applications generally face non-i.i.d. and unbalanced data available to devices, which makes it challenging to ensure good performance across different devices with an FL-trained global model. Theoretical guarantees are only available under restrictive assumptions and for convex objectives, cf. Li et al. (2019b). In this work, we are interested in personalization methods that adapt the model for data available on each device, individually. We refer to a trained global model as the initial model, and the locally adapted model as the personalized model. Existing FL personalization work directly takes a converged initial model and conducts personalization evaluation via gradient descent.
However, in this approach, the training and personalization procedures are completely disconnected, which results in potentially suboptimal personalized models. Meta Learning optimizes the performance after adaptation given few-shot adaptation examples on heterogeneous tasks, and has increasing applications in the context of Supervised Learning and Reinforcement Learning. Model Agnostic Meta Learning (MAML) is a solely gradient-based Meta Learning algorithm, which runs in two connected stages: meta-training and meta-testing. Meta-training learns a sensitive initial model which can conduct fast adaptation on a range of tasks, and meta-testing adapts the initial model for a particular task. Both tasks for MAML, and clients for FL, are heterogeneous. For each task in MAML and client in FL, existing algorithms use a variant of gradient descent locally, and send an overall update to a coordinator to update the global model. If we present the FL training process as meta-training in the MAML language, and the FL personalization via gradient descent as meta-testing, we show in Section 2 that FedAvg and Reptile, two popular FL and MAML algorithms, are very similar to each other; see also related work.

In order to make FL personalization useful in practice, we propose that the following objectives must all be addressed, simultaneously. (1) Improved Personalized Model: for a large majority of the clients. (2) Solid Initial Model: some clients have limited or even no data for personalization. (3) Fast Convergence: reach a high-quality model in a small number of training rounds. Typically, the MAML algorithms only focus on objective (1); that was the original motivation for MAML. Existing FL works usually focus on objectives (2) and (3), and take the personalized performance as secondary. This is largely due to the fact that it was not obvious that getting a solid initial model is feasible or practical if devices are available occasionally and with limited resources. In this work, we study these three objectives jointly, and our main contributions are:
• We point out the connection between two widely used FL and MAML algorithms, and interpret an existing FL algorithm in the light of existing MAML algorithms.
• We propose a novel modification of FedAvg, with two stages of training and fine-tuning, for optimizing the three above objectives.
• We empirically demonstrate that FedAvg is already a meta learning algorithm, optimizing for personalized performance, as opposed to quality of the global model. Furthermore, we show that the fine-tuning stage enables better and more stable personalized performance.
• We observe that different global models with the same accuracy can exhibit very different capacity for personalization.
• We highlight that these results challenge the existing objectives in the FL literature, and motivate new problems for the broader Machine Learning research community.

In this section, we highlight the similarities between the FL and MAML algorithms and interpret FedAvg as a linear combination of a naive baseline and a collection of existing MAML methods. Algorithm 1 presents a conceptual algorithm with nested structure (left column), of which the MAML meta-training algorithm, Reptile (middle column), and the FL-training algorithm, FedAvg (right column), are particular instances. We assume that L is a loss function common to all of the following arguments. In each iteration, a MAML algorithm trains across a random batch of tasks {T_i}.
For each task T_i, it conducts an inner-loop update, and aggregates gradients from each sampled task with an outer-loop update. In each training round, FL uses a random selection of clients {T_i}. For each client T_i and its weight w_i, it runs an optimization procedure for a number of epochs over the local data, and sends the update to the server, which aggregates the updates to form a new global model. If we simplify the setting and assume all clients have the same amount of data, causing the weights w_i to be identical, Reptile and FedAvg in fact become the same algorithm. Several other MAML algorithms, as well as other non-MAML/FL methods, can also be viewed as instances of the conceptual method in the left column of Algorithm 1.

function ClientUpdate(θ, T_i, β)
    Split local dataset into batches B
    θ_i = θ
    for each local epoch e from 1 to E do
        for batch b ∈ B do
            θ_i = θ_i − β ∇_θ L(θ_i, b)
        end for
    end for
    Return g_i = θ_i − θ
end function

Require: Clients per training round M.
function ServerUpdate(θ, {g_i, w_i}, α)
    θ = θ + α (Σ_i w_i g_i) / (Σ_i w_i)
    Return θ
end function

In the following, we rearrange the summands comprising the update formula of the FedAvg/Reptile algorithm to reveal the connection with other existing methods: a linear combination of the Federated SGD (FedSGD) and First Order MAML (FOMAML) algorithms with different numbers of steps. For clarity, we assume identical weights w_i in FedAvg. Consider T participating clients and let θ be the parameters of the relevant model. For each client i, define its local loss function as L_i(θ), and let g^i_j be the gradient computed in the j-th iteration during a local gradient-based optimization process. FedSGD was proposed as a naive baseline against which to compare FL algorithms. For each client, it simply takes a single gradient step based on the local data, which is sent back to the server. It is a sensible baseline because it is a variant of what a traditional optimization method would do if we were to collect all the data in a central location, albeit inefficient in the FL setting. That means that FedSGD optimizes the performance of the initial model, as is the usual objective in datacenter training. The local update produced by FedSGD, g_FedSGD, is the average of the clients' single local gradient steps, g_FedSGD = −(β/T) Σ_{i=1}^{T} g^i_1.

Next, we derive the update of FOMAML in similar terms. Assuming client learning rate β, the personalized model of client i obtained after a K-step gradient update is θ^i_K = θ − β Σ_{j=1}^{K} g^i_j. Differentiating the client update formula, and directly optimizing the current model for the personalized performance after locally adapting K gradient steps, results in the general MAML update. MAML requires computing 2nd-order gradients, which can be computationally expensive and creates potentially infeasible memory requirements. To avoid computing the 2nd-order term, FOMAML simply ignores it, resulting in a first-order approximation of the objective. FOMAML(K) then uses the (K + 1)-th gradient as the local update, after K gradient steps.

Now we have derived the building blocks of FedAvg. As presented in Algorithm 1, the update of FedAvg, g_FedAvg, is the average of client updates, which are the sums of local gradient updates. Rearranging the terms reveals its interpretation as a linear combination of the above ideas. Note that, interpolating to the special case of zero local steps, g_FedSGD can be seen as g_FOMAML(0), optimizing the performance after 0 local updates, i.e., of the current model.
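To make the ClientUpdate/ServerUpdate pair concrete, here is a minimal NumPy sketch on a toy quadratic problem; with identical client weights and a fixed number of local steps per client it coincides with Reptile. The function names and the toy loss are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def client_update(theta, batches, beta, epochs, grad_fn):
    """ClientUpdate: E local epochs of SGD, returning the model delta g_i = theta_i - theta."""
    theta_i = theta.copy()
    for _ in range(epochs):
        for b in batches:
            theta_i -= beta * grad_fn(theta_i, b)
    return theta_i - theta

def server_update(theta, deltas, weights, alpha):
    """ServerUpdate: weighted average of client deltas, scaled by the server learning rate alpha."""
    weights = np.asarray(weights, dtype=float)
    avg = sum(w * g for w, g in zip(weights, deltas)) / weights.sum()
    return theta + alpha * avg

# Toy example: each client holds a target c_i and loss L_i(theta) = 0.5 * ||theta - c_i||^2.
grad_fn = lambda theta, c: theta - c
theta = np.zeros(2)
clients = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
for _ in range(100):
    deltas = [client_update(theta, [c], beta=0.1, epochs=5, grad_fn=grad_fn) for c in clients]
    theta = server_update(theta, deltas, weights=[1.0, 1.0], alpha=1.0)
print(theta)   # approaches the mean of the client optima, [0.5, 1.0]
```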
Note, however, this does not mean that FedAvg optimizes the linear combination of the objectives of the respective algorithms. Nevertheless, we show in the following section that using K = 1 results in a model that is hard to personalize, and that increasing K significantly improves the personalization performance, up until a certain point where the performance of the initial model becomes unstable.

In this section, we present the Personalized FedAvg algorithm, which is the result of experimentally adapting the core FedAvg algorithm to improve the three objectives proposed in the introduction. We denote by FedAvg(E) the Federated Averaging method from Algorithm 1, right, run for E local epochs, weighting the updates proportionally to the amount of data available locally. We denote by Reptile(K) the method from Algorithm 1, middle, run in the FL setting for K local steps, irrespective of the amount of data available locally. Based on a variety of experiments we explored, we propose Personalized FedAvg in Algorithm 2.

Algorithm 2 Personalized FedAvg
1: Run FedAvg(E) with momentum SGD as the server optimizer and a relatively larger E.
2: Switch to Reptile(K) with Adam as the server optimizer to fine-tune the initial model.
3: Conduct personalization with the same client optimizer used during training.

In general, FedAvg training with several local epochs ensures reasonably fast convergence in terms of the number of communication rounds. Due to the complexity of a production system, this measure has been studied as the proxy for the convergence speed of FL algorithms. We find that this method with momentum SGD as the server optimizer already optimizes for the personalized model, objective (1) from the introduction, while the initial model, objective (2), is relatively unstable. Based on prior work, the recommendation to address this problem would be to decrease E or the local learning rate, stabilizing the initial model at the cost of slowing down convergence, objective (3). We propose a fine-tuning stage using Reptile(K) with small K and Adam as the server optimizer, to improve the initial model while preserving and stabilizing the personalized model. We observed that Adam yields better results than other optimizers, see Table 3 in Appendix A.1, and makes the best personalized performance achievable with a broader set of hyper-parameters, see Figure 2. The subsequent deployment and personalization is conducted using the same client optimizer as used for training, as we observe that this choice yields the best results for FedAvg/Reptile-trained models.

Experimental setup. We use the EMNIST-62 dataset as our primary benchmark. It is the original source of the MNIST dataset, which comes with author IDs, and noticeable variations in style, pen width, size, etc., making it a suitable source for simulated FL experiments. The dataset contains 3400 users, each with a train/test data split, with a total of 671,585 train and 77,483 test images. We choose the first 2,500 users as the initial training clients, leaving the remaining 900 clients for evaluation of personalization; these clients are not touched during training. The evaluation metrics are the initial and personalized accuracy, uniformly averaged among all of the FL-personalization clients. This is preferred to a weighted average, as in a production system we care about the future performance on each device, regardless of the amount of data available for personalization.
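As a sketch of how the personalization evaluation described above could be implemented (fine-tuning the trained global model on each held-out client with the same client optimizer, SGD with learning rate 0.02 and batch size 20, and averaging accuracies uniformly across clients), consider the following Keras-based function. The `make_model` factory and the per-client data tuples are assumptions for illustration; this is not the authors' TensorFlow Federated code.

```python
import numpy as np
import tensorflow as tf

def personalized_accuracy(global_weights, make_model, clients, epochs=5,
                          lr=0.02, batch_size=20):
    """Fine-tune the global model separately on each held-out client and report
    the uniform (unweighted) average of initial and personalized test accuracy."""
    init_accs, pers_accs = [], []
    for (x_train, y_train, x_test, y_test) in clients:
        model = make_model()
        model.set_weights(global_weights)
        model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=lr),
                      loss="sparse_categorical_crossentropy", metrics=["accuracy"])
        init_accs.append(model.evaluate(x_test, y_test, verbose=0)[1])
        model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size, verbose=0)
        pers_accs.append(model.evaluate(x_test, y_test, verbose=0)[1])
    return float(np.mean(init_accs)), float(np.mean(pers_accs))
```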
Unless specified otherwise, we use the baseline convolutional model available in TensorFlow Federated, using SGD with learning rate 0.02 and batch size 20 as the client optimizer, and SGD with momentum 0.9 and learning rate 1.0 as the server optimizer. Each experiment was repeated 9 times with random initialization, and the mean and standard deviation of the initial and personalized accuracies are reported. We also use the Shakespeare dataset for next-character prediction, split in a similar manner, with the first 500 clients used for training and the remaining 215 for evaluation of personalization. In Figure 1, left, we present the convergence of both the initial and personalized model during training using the Federated Averaging algorithm. The results correspond to training with E being 2 and 10, with visualization of the empirical mean and variance observed in the 9 replicas of the experiment. Detailed values about performance after 500 rounds of training, and the number of rounds to reach 80% accuracy, are provided in Table 1. These results provide a number of valuable insights. First, the personalized accuracy converges significantly higher than the initial accuracy. This clearly validates EMNIST-62 as an interesting simulated dataset for studying Federated Learning, with significantly non-i.i.d. data available to each client. Second, it provides empirical support to the claim made in Section 2 that Federated Averaging is already a Meta Learning algorithm. The personalized accuracy not only converges faster and higher, but the results are also of much smaller variance than those of the initial accuracy. Third, the personalized accuracy after training with E = 10 is significantly higher than the personalized accuracy after training with E = 2. This is despite the fact that the gap in the initial accuracy between these two variants is somewhat smaller. Moreover, the variance of the personalized accuracy is nearly 3-times smaller for training with E = 10 compared to E = 2, despite the variance of the initial accuracy being smaller for the E = 2 case. This supports the insight from Equation 5 that Federated Averaging with more gradient steps locally should emphasize the personalized accuracy more, potentially at the cost of the initial accuracy. Finally, this challenges the objectives in the existing literature focusing on the setting of Federated Learning. FedAvg was presented as optimizing the performance of a shared, global model, while in fact it might achieve this only as a necessary partial step towards optimizing the personalized performance. We argue that in the presence of non-i.i.d. data available to different clients, the objective of Federated Learning should also be the personalized performance. Consequently, the recommendations that in order to stabilize the convergence one might decrease the number of local epochs or the local learning rate are in some scenarios misguided. In this experiment, even though the initial accuracy is very noisy and roughly constant at a suboptimal level, the personalized accuracy keeps increasing. To evaluate the convergence speed more closely, Table 1 measures the accuracies after 500 rounds of training and the average number of communication rounds at which the initial and personalized accuracy first reach 80%. While Figure 1 shows results using 5 clients per round, the table also shows the same experiment with 20 clients per round, which in general provides even better and more stable personalized accuracy.
The common pattern is that increasing E initially helps, until a certain threshold. From this experiment, E in the range of 5 − 10 seems to be best. We conduct a similar experiment with the Shakespeare data; the results are shown in Figure 1, right, where personalization yields only a small improvement. We conjecture that this is due to the nature of the objective: even though the data is non-i.i.d., next-character prediction is mostly focused on the local structure of the language in general, and is similar across all users. We thus do not study this problem further in this paper. It is likely that for a next-word prediction task, personalization would make a more significant difference. In this section, we study the ability of models to personalize more closely. In particular, we look at the personalization performance as a function of the number of local epochs spent personalizing, and the effect of both the local personalization optimizer and the optimizer used to train the initial model. In Figure 2, left, we study the performance of three different models, personalized using different local optimizers. The models we test here are: one randomly chosen from the models trained with E = 10 in the previous section, with initial accuracy of 74.41%, and two models obtained by fine-tuning that specific model with Reptile, using a larger and a smaller number of local steps K, with Adam as the server optimizer, for a further 200 communication rounds, again using 5 clients per round. For all three models, we show the results of personalization using Adam with default parameters and using SGD with learning rate 0.02 and batch size 100. In all three cases, Adam produces reasonable personalized results, but inferior to those of SGD. We tried SGD with a range of other learning rates, and in all cases we observed this value to work best. Note that this is the same local optimizer that was used during training and fine-tuning, which is similar to the MAML works, where the same algorithm is used for meta-training and meta-testing. The effect of fine-tuning is very encouraging. Using Reptile with more local steps, we get a model with better initial accuracy, which also reaches a slightly improved personalized accuracy. Most importantly, it gets to roughly the same performance for a wide range of local personalization epochs. Such a property is of immense value for practical deployment, as only limited tools for model quality validation can be available on the devices where personalization happens. Using Reptile with fewer local steps significantly improves the initial accuracy, but the personalized accuracy actually drops! Even though Equation 5 suggests that this algorithm should not take the personalized performance into account, it is not clear that such a result should be intuitively expected: it does not mean the algorithm actively suppresses personalization, either, and it is only fine-tuning of a model already trained for personalized performance. We note that this can be seen as analogous to previously reported observations, where this variant of Reptile yields a model with clearly inferior personalized performance. Motivated by this somewhat surprising result, we ask the following question: If the training data were available in a central location, and an initial model was trained using standard optimization algorithms, how would the personalization change? We train such a "centralized initial model" using Adam with default settings, and evaluate the personalization performance of a model snapshot after 10 and 50 training epochs. These two models have initial accuracies similar to the two fine-tuned Reptile models, respectively. The results are in Figure 2, right.
The main message is that it is significantly harder to personalize these centralized initial models. Using SGD with learning rate 0.02 is not good: a significantly smaller learning rate is needed to prevent the models from diverging, and then the personalization improvement is relatively small. In this case, using Adam does provide a better result, but still below the fine-tuning performance in Figure 2, left. It is worth noting that the personalized performance of this converged model is similar to that of the model we get after fine-tuning with Reptile, although using a different personalization optimizer. At the moment, we are unable to suggest a sound explanation for this similarity. At the start of Section 3, we recommended using Adam as the server optimizer for fine-tuning, and here we only presented such results. We did try different optimizers, and found them to yield worse results with higher variance, especially in terms of the initial accuracy. See Appendix A.1 for more details, where we see that all of the optimizers can deliver higher initial accuracy at the cost of slightly lower personalized accuracy. To strengthen the above observations, we look at the distribution of the initial and personalized accuracies of fine-tuned models over multiple experiments. In Figure 3, we look at the same three models as in Figure 2. It is clear that the initial model has a large variance in the initial accuracy; the results are in a range of 12%, but the personalized accuracies are only within a range of 1%. We chose one model, as indicated by the arrows, to be fine-tuned with the two Reptile variants. In both cases, fine-tuning results in more consistent initial and personalized accuracies. Moreover, the best personalized accuracy achieved with one of the Reptile variants is worse than the worst personalized accuracy achieved with the other. In Appendix A.2, we look at a similar visualization on a per-client basis for a given model. Studying this distribution is of great importance, as in practical deployment, even a small degradation in a user's experience might incur a disproportionate cost, relative to the benefit of a comparable improvement in the model quality. We do not study this question deeper in this work, though. Finally, we look at the performance of the fine-tuned models discussed above on both train and test clients. Table 2 shows that if we only looked at the initial accuracy, basic ML principles would suggest the Reptile models are over-fitting, due to a larger gap between the train and test clients. However, the personalized accuracy tells a different story: the gap is roughly the same for both model types, and for both train and test clients, Reptile provides significantly better personalized accuracy, suggesting we need a novel way to predict the generalization of personalized models. In this work, we argue that in the context of Federated Learning, the accuracy of the global model after personalization should be of much greater interest than it has been. Investigation of the topic reveals close similarities between the fields of Federated Learning and Model Agnostic Meta Learning, and raises new questions for these areas, as well as for the broader Machine Learning community. Challenges for Federated Learning. Framing papers in the area of Federated Learning (Konečný et al., 2016a) formulate the objective as training of a shared global model, based on decentralized data storage where each node / client has access to a non-i.i.d. sample from the overall distribution.
The objective is identical to the one the broader ML community would optimize for, had all the data been available in a centralized location. We argue that in this setting, the primary objective should be the adaptation to the statistical heterogeneity present at different data nodes, and we demonstrate that the popular FL algorithm, Federated Averaging, does in fact optimize the personalized performance, and while doing so, also improves the performance of the global model. The experiments we perform demonstrate that the algorithm used to train the model has a major influence on its capacity to personalize. Moreover, solely optimizing the accuracy of the global model tends to have a negative impact on its capacity to personalize, which further questions the correctness of the commonly presented objectives of Federated Learning. Challenges for Model Agnostic Meta Learning. The objectives in the Model Agnostic Meta Learning literature are usually only the model performance after adaptation to a given task. In this work, we present the setting of Federated Learning as a good source of practical applications for MAML algorithms. However, to have impact in FL, these methods need to also consider the performance of the initial model, as in practice there will be many clients without data available for personalization. In addition, the connectivity constraints in a production deployment emphasize the importance of fast convergence in terms of the number of communication rounds. We suggest these objectives become the subject of MAML works, in addition to the performance after adaptation, and that the datasets with a natural user/client structure being established for Federated Learning be considered as the source of experiments for supervised learning. Challenges for broader Machine Learning. The empirical evaluation in this work raises a number of questions of relevance to Machine Learning research in general. In particular, Figure 2 clearly shows that models with similar initial accuracy can have a very different capacity to personalize to a task of the same type as the one they were trained on. This observation raises obvious questions for which we currently cannot provide an answer. How does the training algorithm impact the personalization ability of the trained model? Is there something we can measure that will predict the adaptability of the model? Is it something we can directly optimize for, potentially leading to novel optimization methods? These questions can relate to a gap highlighted in Table 2. While the common measures could suggest the global model is overfitting the training data, this is not true of the personalized model. Transfer Learning is another technique for which our results could inspire a novel solution. It is very common for machine learning practitioners to take a trained model from the research community, replace the final layer with a different output class of interest, and retrain for the new task. We conjecture that the algorithms proposed in the FL and MAML communities could yield base models for which this kind of domain adaptation would yield better results. Finally, we believe that a systematic analysis of optimization algorithms of the inner-outer structure presented in Algorithm 1 could provide novel insights into the connections between optimization and generalization. Apart from the FL and MAML algorithms, a recently proposed method can be interpreted as the outer optimizer in the general algorithm, which improves the stability of a variety of existing optimization methods used as the inner optimizer.
A APPENDIX This Appendix contains further details referenced from the main body of the paper. Table 3 summarizes the attempts at fine-tuning the model used in the main body with different server optimizers. We see that, comparing the same client optimizers, Adam consistently provides better and more stable results in terms of initial accuracy. A.2 PER-CLIENT PERSONALIZATION Figure 4 visualizes the distribution of initial and personalized accuracies on a per-client basis. Each dot represents one client from a random sample of the test clients used for the personalization experiments. Studying this distribution is of great importance, as in practical deployment, degrading a user's experience might incur a disproportionate cost, compared to the benefit of a comparable improvement. Designing methods that robustly identify the clients below the diagonal line and at least revert to the initial model for them is worthy of future investigation.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BkeaEyBYDB
Federated Averaging already is a Meta Learning algorithm, while datacenter-trained methods are significantly harder to personalize.
Memorization of data in deep neural networks has become a subject of significant research interest. In this paper, we link memorization of images in deep convolutional autoencoders to downsampling through strided convolution. To analyze this mechanism in a simpler setting, we train linear convolutional autoencoders and show that linear combinations of training data are stored as eigenvectors in the linear operator corresponding to the network when downsampling is used. On the other hand, networks without downsampling do not memorize training data. We provide further evidence that the same effect happens in nonlinear networks. Moreover, downsampling in nonlinear networks causes the model to not only memorize just linear combinations of images, but individual training images. Since convolutional autoencoder components are building blocks of deep convolutional networks, we envision that our findings will shed light on the important phenomenon of memorization in over-parameterized deep networks. As deep convolutional neural networks (CNNs) become ubiquitous in computer vision due to their applicability and strong performance on a range of tasks BID6, recent work has begun analyzing the memorization properties of such networks in classification. For example, BID19 show that popular CNNs can achieve almost zero training error on randomly labeled datasets, indicating that CNNs have the capacity to "memorize" large training data sets. BID0 and BID15 build on the experiments from BID19 to better understand and evaluate the extent to which CNNs memorize training data. BID0 show that CNNs, when trained on large datasets, are able to learn patterns from realistic data before memorizing training images. BID15 present experiments on "membership inference" (i.e. determining whether an image was used during training) and conclude that modern architectures are capable of "remember[ing] a large number of images and distinguish [ing] them from unseen images".Although the above methods analyze memorization in the classification setting, they do not provide a mechanism through which memorization of training data occurs. We here present downsampling as one mechanism by which deep CNNs memorize specific training images. We will focus our study on the memorization properties of linear and nonlinear fully convolutional autoencoders. The architectures we use (such as U-Net, BID14) are commonly employed in imageto-image tasks, see e.g. BID17. However, we will use these architectures only in the autoencoding framework. We primarily focus on autoencoders BID1 for the following reasons: components of convolutional autoencoders are building blocks of many CNNs; and layerwise pre-training using autoencoders is a technique to initialize individual layers of CNNs to improve training BID3, BID4 ). It is important to note that there are many potential solutions to the autoencoding problem when using over-parameterized autoencoders. In particular, in the linear case, these models may range from learning the (full rank) identity function (which has 0 error in the autoencoding task) to low rank solutions where each training example corresponds to an eigenvector with eigenvalue 1. Thus, understanding how autoencoders learn is of interest in order to gain insights into how deep CNNs memorize training data. Figures 1a and 1b provide two examples of memorization: A typical U-Net architecture (the same as e.g. 
used in BID17 for large hole inpainting) when trained on a single image "memorizes" the training image in the sense that for any input, the output always contains the training image (even if the input is random noise or an arbitrary white square). This paper provides a mechanism for this phenomenon. The outline is as follows: After introducing some notation in Section 2, we show in Section 3 that memorization is tightly coupled with downsampling and also occurs in the simpler setting of linear autoencoding CNNs. In the linear setting, the neural network corresponds to matrix multiplication. In Section 4, we show how to extract this matrix representation and we provide our main conjecture, namely that linear combinations of the training images are stored as eigenvectors of this matrix, whose rank is given by the dimension of the span of the training set. We also provide strong evidence for this conjecture on 2 × 2 images. In Section 5, we analyze the eigenvalue decay and show in various examples that using downsampling linear CNNs, linear combinations of the training examples are stored as eigenvectors with eigenvalues close to 1. Finally, we return to the nonlinear setting in Section 6, providing evidence that memorization is an even stronger phenomenon in nonlinear networks, since the actual training images (in contrast to linear combinations of training images) are memorized. We end with a short discussion in Section 7. In this section, we introduce the mathematical framework for our work and highlight two different functions learned by autoencoding CNNs, namely the identity function and the point map. We denote a training set of n square images by X = {x_1, x_2, ..., x_n}, where each x_i ∈ R^{c×s×s}, with c being the number of color channels (or filter channels) and s denoting the width and height of an image. We will focus on convolutional autoencoders, i.e., CNNs trained to map each image to itself. In particular, we will consider linear CNNs, by which we mean convolutional autoencoders with layers being either nearest neighbor upsampling or convolutional with kernel size 3, zero padding, no activation functions, and no biases. To simplify the computations in Section 4, we assume throughout that the input image size as well as the stride size are powers of 2. We denote the function learned by a CNN on the training set X by C_X: R^{c×s×s} → R^{c×s×s}. The training procedure minimizes a given loss function between the input image and its reconstruction by C_X. We use the mean squared error loss throughout; thus the loss function is given by ℓ(X) = (1/n) Σ_{i=1}^{n} (C_X(x_i) − x_i)², where subtraction and exponentiation are taken elementwise. For linear CNNs, we denote the matrix corresponding to C_X by A_X. Denoting the vectorized (or "flattened") version of an image y ∈ R^{c×s×s} by y_f ∈ R^{cs²}, A_X satisfies A_X y_f = (C_X(y))_f. In this work, we identify and analyze an architectural mechanism that is able to fundamentally alter the function C_X learned by an autoencoding CNN on a given training set. The following two functions will play an important role in the subsequent analysis: the identity function, given by C_X(y) = y for any y ∈ R^{c×s×s}, and the point map, given by C_X(y) = C_X(x_0) for any y ∈ R^{c×s×s}, where x_0 is a particular element in span C_X(X) := span{C_X(x_i), 1 ≤ i ≤ n}. An extreme form of memorization occurs when a CNN learns the point map on a training set of size one, i.e., it maps any image to the same fixed image.
In this section, we will illustrate how downsampling acts as a mechanism for learning the point map in deep CNNs. In particular, we will show that even if a downsampling network has the capacity to learn the identity function, it prefers to learn the point map. We consider the linear CNNs defined by Network ND (non-downsampling) and Network D (downsampling) in FIG2. Both networks employ 128 filters in each convolutional layer in all but the last layer, which contains 3 filters. The two networks only differ by the property that Network D uses filters of stride 8 in the first layer to downsample the image and then later uses nearest neighbor upsampling with scale factor 8 to rescale the image, while Network ND does not perform any downsampling. In concordance with the severe downsampling from 224 × 224 to 7 × 7 images performed by ResNet and VGG (BID9; BID16), Network D downsamples from 32 × 32 to 4 × 4 images. However, we will show in Section 5 that the same memorization phenomenon is also observed in networks that only downsample using multiple convolutional layers of stride 2, as is done in U-Nets (BID14). In the examples, the networks were initialized using the default initialization in the PyTorch deep learning library BID13, namely each layer's weights were drawn i.i.d. from a uniform distribution U(−a, a) with a = 1/√P, where P is the number of parameters in that layer. Both networks were trained for 2000 iterations on a single 3 × 32 × 32 image from CIFAR10 BID12 of a frog. Network ND reconstructed the training image with a training loss of 10^{-4} and Network D with a loss of 10^{-2}. We then applied the trained networks to test images consisting of both realistic images and random images, with each pixel drawn from the standard normal distribution. As shown in FIG2 and 2b, although trained on the same input for the same number of iterations, Network D (with downsampling) learned the point map while Network ND (no downsampling) learned the identity function up to sign. The possible sign flip is a consequence of using no biases. In fact, this contrasting behavior is very robust throughout the training process. In FIG4, we see that regardless of when we stop training, even after only 10 iterations, Network D learns a point map, while Network ND learns a function visually similar to the identity map. One may conjecture that the memorization phenomenon that occurs using Network D could be due to the small number of filters in the downsampling layer, so that not all the features from the input could be preserved and the network would not be capable of learning a full-rank identity map. This is not the case. To demonstrate that this is in general not the reason why downsampling networks learn a point map, we next present a downsampling network which, as we show, has the capacity to learn the identity function, but still prefers to learn the point map as the result of optimization. Example. Consider the linear CNN defined by the small downsampling Network DS presented in FIG22, which is trained on a single 1 × 2 × 2 image for 2000 iterations. Figure 4b shows that there exists an initialization such that the network learns the identity function. A full description of how we computed this manual initialization is provided in Appendix A. In contrast, as shown in Figure 4c, initializing Network DS with the default PyTorch uniform distribution results in the point map.
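For concreteness, the following PyTorch sketch shows the flavor of the two linear architectures contrasted above. The exact layer counts of Networks D and ND are not fully specified here, so this is only an illustrative approximation under the paper's stated conventions (kernel size 3, no biases, no activations, 128 filters per layer, and stride-8 downsampling followed by scale-8 nearest neighbor upsampling in Network D).

```python
import torch.nn as nn

# Network D (downsampling): stride-8 convolution maps 32x32 -> 4x4, then nearest
# neighbor upsampling restores the resolution before the final 3-filter layer.
network_d = nn.Sequential(
    nn.Conv2d(3, 128, kernel_size=3, stride=8, padding=1, bias=False),
    nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1, bias=False),
    nn.Upsample(scale_factor=8, mode="nearest"),
    nn.Conv2d(128, 3, kernel_size=3, stride=1, padding=1, bias=False),
)

# Network ND (non-downsampling): identical except that no striding or upsampling is used.
network_nd = nn.Sequential(
    nn.Conv2d(3, 128, kernel_size=3, stride=1, padding=1, bias=False),
    nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1, bias=False),
    nn.Conv2d(128, 3, kernel_size=3, stride=1, padding=1, bias=False),
)
```

Both networks are linear maps of the input because they contain no biases and no activation functions, which is what allows the operator analysis of the following sections.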
This provides the first example showing that even though the network has the capacity to learn the identity function, it prefers to learn a point map under the default PyTorch initialization. We observed the same effects using the Xavier uniform initialization, which is based on the assumption that activations are linear (BID5). However, these results are not observed for linear networks when using Kaiming initialization, which is expected since this initialization is meant for nonlinear networks with ReLU/PReLU activations (BID8). In Appendix E, we show that nonlinear networks memorize training images under any of these initializations. We now turn to extracting and analyzing the linear operator that describes the function learned by the linear CNN. While the full algorithms for extracting the matrix A_X from the network are described in Appendix B, we here provide intuition for this procedure. Since the matrix A_X can be decomposed as a product of matrices, one for each layer, it suffices to provide intuition for converting a general convolutional layer and a general upsampling layer into a matrix. We then discuss our main conjecture and proposition linking the eigenvectors of A_X to the mechanism of memorization. To simplify notation, we assume that the inputs and outputs are already zero-padded images (with 1 layer of zero padding on each side), i.e., the original images lie in R^{c×(s−2)×(s−2)}, so that the images after vectorization and zero padding lie in R^{cs²}. Consider a convolutional layer with stride size t, kernel volume f × 3 × 3, and 1 layer of zero padding, which operates on an incoming zero-padded vectorized image of f s² voxels, where f is the depth and s the width and height of the incoming image. Assuming that the stride and image sizes are powers of 2, it follows that (s − 2) is a multiple of t. Hence the matrix corresponding to the operator of this particular layer is of size f((s − 2)/t + 2)² × f s², where (s − 2)/t + 2 corresponds to the width and height of the output after striding and zero padding. The matrix itself is circulant-like, where the zero padding must be carefully accounted for by additional shifts and rows of zeros. A particular example is shown in Appendix B. Next, we provide intuition on how to extract the matrix corresponding to the nearest neighbor upsampling operation. The full algorithm together with an example is given in Appendix B. Given a vectorized zero-padded image of f s² voxels as input, nearest neighbor upsampling with scale factor k corresponds to a matrix of size f(k(s − 2) + 2)² × f s², where (k(s − 2) + 2) comes from having to scale only the non-zero-padded elements of the representation and adding 2 for the new zero padding. This matrix is composed of blocks of circulant-like matrices consisting of one-hot vectors, where blocks of s identical rows are shifted by 1 element. Finally, the full operator A_X for a linear network C_X is obtained by multiplying the matrices corresponding to each layer. Being able to extract the linear operator A_X from a linear CNN is critical for analyzing the mechanism of memorization, since it allows an examination of its eigenvalues and eigenvectors. Note that A_X operates on the space of c × s × s images. Hence every eigenvector (when stripped of zero-padding and reshaped to c × s × s) represents an image in this space. However, since A_X is in general not symmetric, eigenvectors can be complex, and hence the real and imaginary components of the eigenvectors represent separate images in the original space.
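The layer-by-layer construction above is one way to obtain A_X. An equivalent and simpler-to-implement alternative (not the Appendix B algorithm) is to probe the trained linear network with the standard basis of the input space: for a bias-free, activation-free network, the images of the basis vectors are exactly the columns of A_X. A minimal sketch, assuming the network maps c × s × s inputs to outputs of the same shape:

```python
import torch

def extract_linear_operator(linear_net, c=3, s=32):
    """Return the matrix A_X of a purely linear (no bias, no activation) network."""
    d = c * s * s
    eye = torch.eye(d)
    cols = []
    with torch.no_grad():
        for j in range(d):
            e_j = eye[j].reshape(1, c, s, s)       # j-th standard basis "image"
            cols.append(linear_net(e_j).reshape(d))  # A_X e_j is the j-th column
    return torch.stack(cols, dim=1)                 # shape (d, d)
```

This requires d forward passes (d = 3072 for CIFAR10-sized inputs), which is cheap enough for the analyses described next.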
We can easily obtain a bound on the rank of A_X: Since A_X is given by a product of matrices, one for each layer in the linear CNN, the rank of A_X is at most equal to the rank of the minimal-rank matrix in the factorization of A_X. We denote this number by r_min. We now present our main conjecture. Conjecture. Let X = {x_1, ..., x_n}, x_i ∈ R^{c×s×s}, be a training set of images of size c × s × s. (a) If A_X is the linear operator of a downsampling network trained to zero loss, then rank(A_X) = dim(span(X)), where all nonzero eigenvalues are equal to 1 and the corresponding eigenvectors are linear combinations of the input training images. (b) If A_X is the linear operator of a non-downsampling network trained to zero loss, then rank(A_X) = r_min. In other words, we conjecture that linear downsampling networks learn a low-rank solution when the number of linearly independent training examples is not sufficiently large (i.e., not larger than cs²), even when they have the capacity to learn the identity function. In particular, we conjecture that the rank is given by the dimension of the span of the training images. Most importantly, we conjecture that the mechanism of memorization in downsampling networks is to store linear combinations of the training images as eigenvectors of the linear operator. On the other hand, we conjecture that linear non-downsampling networks learn a much higher rank solution, thereby often rendering the solution visually indistinguishable from the identity function. Our conjecture also implies that when training a linear downsampling CNN on images of size 3 · 224 · 224, which corresponds to the input image size for VGG and ResNet (BID9; BID16), the number of linearly independent training examples needs to be at least 3 · 224 · 224 = 150,528 before the network can learn the identity function. However, assuming that realistic images lie on a low-dimensional manifold, this conjecture indicates that a linear downsampling network will learn a basis for the low-dimensional manifold instead of the identity when trained on sufficiently many realistic images. Before providing empirical evidence for our conjecture in Section 5, we end this section by summarizing our theoretical evidence in the following proposition, showing that the network in FIG22 can learn solutions of all ranks 1 through 4 for 1 × 2 × 2 images, depending on the dimension of the span of the training set. Proposition. The linear Network DS presented in FIG22 can learn a linear operator of rank min(dim(span(X)), 4) for any training set X with 1 ≤ dim(span(X)) ≤ 4. The proof is given in Appendix C. Note that this proposition does not imply that A_X must obtain rank r = min(dim(span(X)), 4) for all training sets X of 1 × 2 × 2 images, but rather that the conjecture holds for any example of training sets that we tried, in particular also training sets covering each possible nonzero value of min(dim(span(X)), 4). We now provide empirical evidence for our main conjecture by analyzing the operator A_X for networks trained on CIFAR10 color images (which have width and height 32). First, we show that when training on one or two images respectively, a downsampling network learns an operator that has very low effective rank, with the top eigenvector(s) corresponding to the training image(s). On the other hand, we show that a non-downsampling network trained on the same data learns an operator with much higher rank, closer to the full dimensionality of the data.
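Assuming A_X has been extracted (for example as sketched above), the conjecture can be checked numerically by comparing dim(span(X)) with the number of eigenvalues of A_X of magnitude close to 1 and with the numerical rank of A_X. The following is only an illustrative sketch; tolerance values are assumptions.

```python
import torch

def check_conjecture(A_X, train_images, tol=1e-2):
    # dim(span(X)): rank of the n x (c*s*s) matrix of flattened training images.
    X_flat = torch.stack([x.reshape(-1) for x in train_images])
    dim_span = torch.linalg.matrix_rank(X_flat).item()

    # A_X is in general not symmetric, so its eigenvalues are complex.
    eigvals = torch.linalg.eigvals(A_X)
    num_near_one = int(((eigvals.abs() - 1).abs() < tol).sum())
    rank_A = torch.linalg.matrix_rank(A_X, rtol=tol).item()

    return {"dim_span": dim_span,
            "num_eigvals_near_1": num_near_one,
            "rank_A_X": rank_A}
```

Under the conjecture, a downsampling network trained to low loss should give num_eigvals_near_1 ≈ rank_A_X ≈ dim_span, while a non-downsampling network should give a much larger rank independent of dim_span.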
Finally, we will train downsampling networks on tens to thousands of images from a single class of CIFAR10 and present the rank of the learned solutions in Appendix D. All of our downsampling networks downsample using 3 convolutional layers with stride 2 and then upsample using a single nearest neighbor upsampling layer with scale factor 8. The general architecture is shown in FIG6. Importantly, we note that if we use such a network on color CIFAR10 images, we need at least b = 3 · 8 · 8 = 192 filters in order to be able to learn the identity function, since otherwise r_min < 3 · 32 · 32. Our non-downsampling networks have the same convolutional layer scheme as the downsampling networks. We denote a downsampling (respectively non-downsampling) network with a filters on upsampled representations and b filters on downsampled representations by D_{a,b} (respectively ND_{a,b}). The networks are initialized using the default initialization in PyTorch BID13, trained using the Adam optimizer BID11 with learning rate 10^{-4}, and trained until the loss decreased by less than 10^{-4} or for 20,000 epochs. Note that we disregard overfitting in training, since we expect an overfitted autoencoder to learn the identity function. As pointed out in FIG4, the memorization effect is robust throughout the training process. However, in this case, we wish to train until the training error is sufficiently low (i.e., around 10^{-3}) so that we can understand the rank of the operator in relation to our conjecture. FIG8 shows the results for various downsampling and non-downsampling networks when trained on 1 ≤ k ≤ 2 images. As shown in FIG8, in both cases Network D_{64,192} stores the k images (up to a shift in the color channels) as eigenvectors with eigenvalues close to 1 and produces roughly a rank k solution. In particular, when trained on a single picture of a plane from CIFAR10, this network stores the training image as the real component of the top eigenvector; it is an inverted-color image of the plane, and the rank of the learned solution is approximately 1 (FIG8). When trained on a picture of the plane and a car, the top two eigenvectors are linear combinations of the plane and the car, with the plane having inverted colors (FIG8). In fact, the plane is more visible in the complex component of the second eigenvector, while the car is more visible in the real components. The learned solution is of rank approximately 2, with the top two eigenvalues being complex conjugates (hence having the same magnitude). On the other hand, FIG8 shows that training the non-downsampling network ND_{4,4} on a single image or on two images results in a higher-rank solution, with rank and spectrum being independent of k. When training on k > 2 images from different classes of CIFAR10, our conjecture stipulates that the rank of the learned solution indicates the dimension of the span of the training set. Since visualization of the eigenvectors when trained on k > 2 images is not as insightful (it becomes difficult to visually identify the training images in the linear combination), the corresponding visualizations and eigenvalue plots are moved to Appendix D. In this section, we analyze the phenomenon of memorization for non-linear networks. In the following experiments, we use the linear CNNs D_{a,b} and ND_{a,b} as the backbone architecture, but with LeakyReLU activations BID18 after every convolutional layer. We denote the corresponding non-linear downsampling networks by NLD_{a,b}. For the experiment in FIG12, we trained the non-linear downsampling network NLD_{128,128} on a set of 10 images, one from each class of CIFAR10.
Interestingly, when fed a new image from CIFAR10, Gaussian noise, or artificial white squares, the output was always one of the training images. This suggests that the effect of memorization is even stronger for non-linear downsampling networks, since they output specific training examples, as compared to the linear combinations of training examples which we had observed for linear downsampling networks. In other words, the individual training examples act as strongly attracting fixed points for non-linear downsampling networks. We end this section with a remark on the impact of different initializations on training. While the experiments shown in FIG10 were initialized with the default uniform distribution from PyTorch, Appendix E shows results with different initializations. In particular, we show that memorization still occurs when training NLD_{a,b} with either Xavier or Kaiming uniform/normal initializations. This paper identified downsampling as a mechanism through which linear CNNs memorize training images. We demonstrated that downsampling convolutional autoencoders memorize training images in both the linear and nonlinear setting. In particular, we showed that it is not just the dimensionality reduction of downsampling that causes these models to learn point maps, by demonstrating that a downsampling CNN architecture with the capacity to learn the identity function still prefers the point map. In the linear case, this preference for low-rank over the equally valid high-rank solutions is highly suggestive of similar phenomena observed in problems such as matrix completion (e.g., Gunasekar et al.). In the non-linear case, memorization in downsampling networks is manifested even more strikingly, with nearly arbitrary input images being mapped to output images that are visually identifiable as one of the training images. While the exact mechanism still needs to be explored, this is reminiscent of FastICA in Independent Component Analysis BID10 or more general non-linear eigen-problems BID2, where every "eigenvector" of certain iterative maps has its own basin of attraction. On the other hand, non-downsampling autoencoders do not memorize the training data and consistently learn a "high rank" map, similar to the identity map, at least visually. We conjecture that our findings will help to shed light on the strong generalization properties of downsampling networks for image classification and recognition tasks. Indeed, if downsampling networks memorize images or linear combinations of images when trained on large datasets, they may be capable of learning representations within the space of all realistic images instead of learning the standard full-rank basis. We conclude with a mention of further areas of exploration spurred by our work. We still need to understand why downsampling forces the network to learn low-rank solutions even when the network has the capacity to learn the identity. This requires developing a better grasp of optimization and initialization, starting with linear autoencoders and proceeding to the non-linear settings. Finally, we need to explore connections between our conjecture and the manifold hypothesis to better understand the space of realistic images. Here we provide the initialization that realizes the identity function for the network from FIG22. The procedure is as follows (the positions are 0-indexed): 1. There are 4 filters in the first layer, each of size 3 × 3 and with stride 2. Filter 1 should have indices set to 1. Filter 2 should have indices set to 1.
Filter 3 should have indices set to 1. Filter 4 should have indices set to 1. 2. Now nearest neighbor upsampling will be applied to each of the outputs. 6. The last filter is of size 3 × 3. This filter should have indices (i, 1, 1) set to 1 for 0 ≤ i ≤ 3. In this section, we present algorithms for converting convolutional layers and nearest neighbor upsampling layers into matrices. We first present how to construct a block of this matrix for a single filter in Algorithm 1. To construct a matrix for multiple filters, one need only apply the provided algorithm to construct separate matrix blocks for each filter and then concatenate them. We now provide an example of how to convert the first layer from the network in FIG22 into a single matrix for 1 × 2 × 2 images. First suppose we have a zero-padded 1 × 2 × 2 matrix as input, which is shown vectorized to the right. Now we apply the first convolutional layer to this 16 × 1 vector. We have 4 convolutional filters; let C^{(j)}_i denote the i-th parameter (read in row-major order) of filter j, for 1 ≤ i ≤ 9 and 1 ≤ j ≤ 4. The resulting convolutional matrix is of size (4 · 3 · 3) × (1 · 4 · 4), but here we show just the first 9 columns (i.e., the pattern for the i-th filter). We similarly create matrices for the other convolutional layers. Next we present how to construct the rows of such a matrix in Algorithm 2:

for filterIndex ← 0 to f − 1 do
    for kernelIndex ← 0 to 8 do
        rowIndex ← kernelIndex mod 3 + paddedSize
C ← zeros matrix of size ((resized + 2)², f · paddedSize²)
index ← resized + 2 + 1
for shift ← 0 to resized − 1 do
    nextBlock ← zeros matrix of size (resized, f · paddedSize²)
    for rowShift ← 1 to resized − 1 do
return C
end function

We now provide the upsampling matrix for an upsampling layer with scale factor 2 operating on a vectorized zero-padded 1 × 1 × 1 image. We now give an example for the network from FIG4 when it is applied to vectorized zero-padded 1 × 2 × 2 input images. Suppose that the i-th convolutional layer can be written as a matrix A_i and the upsampling layer is written as a matrix U. Then A_X is obtained as the product of the matrices A_i and U, taken in the order in which the layers are applied. In each of the above examples, we have thus demonstrated that when training on k linearly independent examples, A_X obtains rank k. Now we show that when the span of X has dimension k, then A_X has rank k. To do this, we will consider a selection of k basis elements from the above 4 matrices, and then show that introducing linear combinations of these basis elements into X does not affect the rank of A_X. Namely, we consider selecting a particular training set of dimension 3. After training, the resulting eigenvalues are 1, 1, 1, −0.005, so we again see that the rank of A_X is given by the dimension of the span of the training set. In this section, we examine the operator of D_{64,128} when trained using a larger dataset. Suppose we first train network D_{64,128} for 20,000 iterations using 10 examples, 1 from each class of CIFAR10, with the training examples shown in FIG21. Network D_{64,128} achieves an MSE of 0.0046 after 20,000 iterations. In FIG21, there are 10 eigenvalues with magnitude close to 1. FIG21 also presents the real and imaginary components of the 64 eigenvectors corresponding to the top 64 eigenvalues (the grids should be read in row-major order).
However, as the eigenvectors are now linear combinations of the training inputs, it is much harder to decipher exactly how the training examples are represented in the first 10 eigenvectors. Nevertheless, we can clearly see that some of the training examples are present in these eigenvectors. For example, the real components of eigenvectors 1, 2, and 3 contain the bird training example, but with inverted colors. Similarly, the real components of eigenvectors 4 and 5 contain the boat example, again with inverted colors; the real components of eigenvectors 6 and 7 contain the frog example; and the real components of eigenvectors 8 and 9 contain the car example. Next we consider the rank of the learned operators when training on thousands of examples from a single class of CIFAR10. In particular, we trained using 2000 dogs, 5000 dogs, or 5000 planes from CIFAR10's training set. If our conjecture holds, then the resulting solution should be of rank equal to the dimension of the span of each of these training sets. Note that the rank of the learned solution should be less than 3072 even though we are training on more than 3072 images, because it is unlikely that there are 3072 linearly independent images in each of the sets. The training losses on each of these datasets are shown in Table 1, and the corresponding eigenvalue and eigenvector plots are shown in FIG1. From the plot of eigenvalues, it appears that the rank of the resulting matrix for D_{64,192} on 2000 dogs, 5000 dogs, and 5000 planes is around 500, 900, and 750, respectively. However, it could be the case that the remaining eigenvalues are providing small nonzero contributions to the learned solution. To rectify this issue, we took the SVD of the resulting operator, zeroed out the lowest singular values, and compared the MSE of the reconstructions using the resulting operator against those of the original. In particular, we compared the MSE of the reconstructions (after min-max scaling) to the training set used. The results are summarized in Table 1. From this table it is evident that only the top 500, 900, and 750 components were necessary to achieve low training error. Assuming our conjecture holds, this would indicate that these values are approximations of the dimension of the span of the corresponding training sets. We note that this is an approximation because the true dimension would only be recovered as the training error approaches 0. The training errors here are of the order of 10^{-3}, thereby justifying such approximations. In FIG1, we present the effect of different initializations on learning the point map in the nonlinear downsampling network NLD_{128,128}. We see that NLD_{128,128} learns the point map regardless of the initialization used. However, we note that for Kaiming initializations, the resulting reconstruction after feeding in new inputs is a blurred version of the training image.
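The truncation experiment described above can be summarized by the following sketch, which zeroes out all but the top-k singular values of the extracted operator and measures the reconstruction MSE on the flattened (zero-padded) training images. The min-max scaling step mentioned in the text is omitted for brevity, and the function and argument names are illustrative assumptions.

```python
import torch

def truncated_reconstruction_mse(A_X, X_flat, k):
    """MSE of reconstructing the rows of X_flat with a rank-k truncation of A_X."""
    U, S, Vh = torch.linalg.svd(A_X)
    S_trunc = S.clone()
    S_trunc[k:] = 0.0                      # keep only the top-k singular values
    A_k = U @ torch.diag(S_trunc) @ Vh
    recon = X_flat @ A_k.T                 # each row of X_flat is a flattened training image
    return ((recon - X_flat) ** 2).mean().item()
```

Sweeping k and finding the smallest k for which the MSE stays near the original training error gives the effective rank estimates (around 500, 900, and 750 in the experiments above).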
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ByGUFsAqYm
We identify downsampling as a mechansim for memorization in convolutional autoencoders.
Reinforcement learning provides a powerful and general framework for decision making and control, but its application in practice is often hindered by the need for extensive feature and reward engineering. Deep reinforcement learning methods can remove the need for explicit engineering of policy or value features, but still require a manually specified reward function. Inverse reinforcement learning holds the promise of automatic reward acquisition, but has proven exceptionally difficult to apply to large, high-dimensional problems with unknown dynamics. In this work, we propose AIRL, a practical and scalable inverse reinforcement learning algorithm based on an adversarial reward learning formulation that is competitive with direct imitation learning algorithms. Additionally, we show that AIRL is able to recover portable reward functions that are robust to changes in dynamics, enabling us to learn policies even under significant variation in the environment seen during training. While reinforcement learning (RL) provides a powerful framework for automating decision making and control, significant engineering of elements such as features and reward functions has typically been required for good practical performance. In recent years, deep reinforcement learning has alleviated the need for feature engineering for policies and value functions, and has shown promising results on a range of complex tasks, from vision-based robotic control BID12 to video games such as Atari BID13 and Minecraft BID16. However, reward engineering remains a significant barrier to applying reinforcement learning in practice. In some domains, the reward may be difficult to specify (for example, encouraging "socially acceptable" behavior), and in others, a naïvely specified reward function can produce unintended behavior BID2. Moreover, deep RL algorithms are often sensitive to factors such as reward sparsity and magnitude, making well-performing reward functions particularly difficult to engineer. Inverse reinforcement learning (IRL) BID19 BID14 refers to the problem of inferring an expert's reward function from demonstrations, which is a potential method for solving the problem of reward engineering. However, inverse reinforcement learning methods have generally been less efficient than direct methods for learning from demonstration such as imitation learning BID10, and methods using powerful function approximators such as neural networks have required tricks such as domain-specific regularization and operate inefficiently over whole trajectories BID6. There are many scenarios where IRL may be preferred over direct imitation learning, such as re-optimizing a reward in novel environments BID7 or inferring an agent's intentions, but IRL methods have not been shown to scale to the same complexity of tasks as direct imitation learning. However, adversarial IRL methods BID6 hold promise for tackling difficult tasks due to the ability to adapt training samples to improve learning efficiency. Part of the challenge is that IRL is an ill-defined problem, since there are many optimal policies that can explain a set of demonstrations, and many rewards that can explain an optimal policy BID15. The maximum entropy (MaxEnt) IRL framework introduced by BID24 handles the former ambiguity, but the latter ambiguity means that IRL algorithms have difficulty distinguishing the true reward functions from those shaped by the environment dynamics.
While shaped rewards can increase learning speed in the original training environment, when the reward is deployed at test-time on environments with varying dynamics, it may no longer produce optimal behavior, as we discuss in Sec. 5. To address this issue, we discuss how to modify IRL algorithms to learn rewards that are invariant to changing dynamics, which we refer to as disentangled rewards. In this paper, we propose adversarial inverse reinforcement learning (AIRL), an inverse reinforcement learning algorithm based on adversarial learning. Our algorithm provides for simultaneous learning of the reward function and value function, which enables us to both make use of the efficient adversarial formulation and recover a generalizable and portable reward function, in contrast to prior works that either do not recover a reward functions BID10, or operates at the level of entire trajectories, making it difficult to apply to more complex problem settings BID6 a). Our experimental evaluation demonstrates that AIRL outperforms prior IRL methods BID6 on continuous, high-dimensional tasks with unknown dynamics by a wide margin. When compared to GAIL BID10, which does not attempt to directly recover rewards, our method achieves comparable on tasks that do not require transfer. However, on tasks where there is considerable variability in the environment from the demonstration setting, GAIL and other IRL methods fail to generalize. In these settings, our approach, which can effectively disentangle the goals of the expert from the dynamics of the environment, achieves superior . Inverse reinforcement learning (IRL) is a form of imitation learning and learning from demonstration BID3. Imitation learning methods seek to learn policies from expert demonstrations, and IRL methods accomplish this by first inferring the expert's reward function. Previous IRL approaches have included maximum margin approaches BID0 BID18, and probabilistic approaches such as BID24; BID4. In this work, we work under the maximum causal IRL framework of BID23. Some advantages of this framework are that it removes ambiguity between demonstrations and the expert policy, and allows us to cast the reward learning problem as a maximum likelihood problem, connecting IRL to generative model training. Our proposed method most closely resembles the algorithms proposed by BID21; BID10 BID5. Generative adversarial imitation learning (GAIL) BID10 differs from our work in that it is not an IRL algorithm that seeks to recover reward functions. The critic or discriminator of GAIL is unsuitable as a reward since, at optimality, it outputs 0.5 uniformly across all states and actions. Instead, GAIL aims only to recover the expert's policy, which is a less portable representation for transfer. BID21 does not interleave policy optimization with reward learning within an adversarial framework. Improving a policy within an adversarial framework corresponds to training an amortized sampler for an energy-based model, and prior work has shown this is crucial for performance BID6. BID22 also consider learning cost functions with neural networks, but only evaluate on simple domains where analytically solving the problem with value iteration is tractable. Previous methods which aim to learn nonlinear cost functions have used boosting BID17 and Gaussian processes BID11, but still suffer from the feature engineering problem. 
Our IRL algorithm builds on the adversarial IRL framework proposed by BID5, with the discriminator corresponding to an odds ratio between the policy and the exponentiated reward distribution. The discussion in BID5 is theoretical, and to our knowledge no prior work has reported a practical implementation of this method. Our experiments show that direct implementation of the proposed algorithm is ineffective, due to high variance from operating over entire trajectories. While it is straightforward to extend the algorithm to single state-action pairs, as we discuss in Section 4, a simple unrestricted form of the discriminator is susceptible to the reward ambiguity described in BID15, making learning portable reward functions difficult. As illustrated in our experiments, this greatly limits the generalization capability of the method: the learned reward functions are not robust to environment changes, and it is difficult to use the algorithm for the purpose of inferring the intentions of agents. We discuss how to overcome this issue in Section 5. BID1 consider learning reward functions which generalize to new tasks given multiple training tasks. Our work instead focuses on how to achieve generalization within the standard IRL formulation. Our inverse reinforcement learning method builds on the maximum causal entropy IRL framework BID23, which considers an entropy-regularized Markov decision process (MDP), defined by the tuple (S, A, T, r, γ, ρ_0). S and A are the state and action spaces, respectively, and γ ∈ (0, 1) is the discount factor. The dynamics or transition distribution T(s'|a, s), the initial state distribution ρ_0(s), and the reward function r(s, a) are unknown in the standard reinforcement learning setup and can only be queried through interaction with the MDP. The goal of (forward) reinforcement learning is to find the optimal policy π* that maximizes the expected entropy-regularized discounted reward under π, T, and ρ_0:

π* = arg max_π E_{τ∼π}[ Σ_{t=0}^{T} γ^t ( r(s_t, a_t) + H(π(·|s_t)) ) ],

where τ = (s_0, a_0, ..., s_T, a_T) denotes a sequence of states and actions induced by the policy and dynamics. It can be shown that the optimal policy π*(a|s) takes the form π*(a|s) ∝ exp{Q*_soft(s, a)} (BID23; BID9), where

Q*_soft(s_t, a_t) = r(s_t, a_t) + E_{(s_{t+1}, ...) ∼ π*}[ Σ_{t'=t+1}^{T} γ^{t'−t} ( r(s_{t'}, a_{t'}) + H(π*(·|s_{t'})) ) ].

Inverse reinforcement learning instead seeks to infer the reward function r(s, a) given a set of demonstrations D = {τ_1, ..., τ_N}. In IRL, we assume the demonstrations are drawn from an optimal policy π*(a|s). We can interpret the IRL problem as solving the maximum likelihood problem:

max_θ E_{τ∼D}[ log p_θ(τ) ],   (1)

where p_θ(τ) ∝ p(s_0) Π_{t=0}^{T} p(s_{t+1}|s_t, a_t) e^{γ^t r_θ(s_t, a_t)} parametrizes the reward function r_θ(s, a) but fixes the dynamics and initial state distribution to those of the MDP. Note that under deterministic dynamics, this simplifies to an energy-based model where, for feasible trajectories, p_θ(τ) ∝ e^{Σ_{t=0}^{T} γ^t r_θ(s_t, a_t)} BID24. BID5 propose to cast optimization of Eqn. 1 as a GAN BID8 optimization problem. They operate in a trajectory-centric formulation, where the discriminator takes on a particular form (f_θ(τ) is a learned function; π(τ) is precomputed and its value "filled in"):

D_θ(τ) = exp{f_θ(τ)} / ( exp{f_θ(τ)} + π(τ) ),   (2)

and the policy π is trained to maximize R(τ) = log D_θ(τ) − log(1 − D_θ(τ)). Updating the discriminator can be viewed as updating the reward function, and updating the policy can be viewed as improving the sampling distribution used to estimate the partition function. If trained to optimality, it can be shown that an optimal reward function can be extracted from the optimal discriminator as f*(τ) = R*(τ) + const, and π recovers the optimal policy.
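The discriminator structure above (written here for the single state-action case introduced in the next section) has a convenient logit form: since D = exp(f)/(exp(f) + π) = sigmoid(f − log π), both the binary logistic loss and the policy reward log D − log(1 − D) = f − log π can be computed directly from f − log π. A minimal, hedged sketch, where f_net is an assumed module producing f_θ(s, a):

```python
import torch
import torch.nn.functional as F

def discriminator_logits(f_net, states, actions, log_pi):
    # D = sigmoid(f - log pi), so the discriminator logit is simply f - log pi.
    return f_net(states, actions) - log_pi

def discriminator_loss(f_net, expert_batch, policy_batch):
    # expert_batch and policy_batch are (states, actions, log_pi) tuples.
    exp_logits = discriminator_logits(f_net, *expert_batch)   # label 1: expert data
    pol_logits = discriminator_logits(f_net, *policy_batch)   # label 0: policy samples
    return (F.binary_cross_entropy_with_logits(exp_logits, torch.ones_like(exp_logits)) +
            F.binary_cross_entropy_with_logits(pol_logits, torch.zeros_like(pol_logits)))

def policy_reward(f_net, states, actions, log_pi):
    # log D - log(1 - D) reduces to f - log pi, an entropy-regularized reward for the policy.
    return (f_net(states, actions) - log_pi).detach()
```

This also makes explicit why, at optimality, f recovers log π* (the advantage under maximum entropy RL): the discriminator is indifferent exactly when f − log π = 0.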
We refer to this formulation as generative adversarial network guided cost learning (GAN-GCL) to distinguish it from guided cost learning (GCL) BID5. This formulation shares similarities with GAIL BID10, but GAIL does not place special structure on the discriminator, so the reward cannot be recovered. In practice, using full trajectories as proposed by GAN-GCL can result in high-variance estimates as compared to using single state-action pairs, and our experimental results show that this results in very poor learning. We instead propose a straightforward conversion of Eqn. 2 into the single state and action case, where: DISPLAYFORM0. As in the trajectory-centric case, we can show that, at optimality, f*(s, a) = log π*(a|s) = A*(s, a), the advantage function of the optimal policy. We give a justification of this, as well as a proof that this algorithm solves the IRL problem, in Appendix A. This change results in an efficient algorithm for imitation learning. However, it is less desirable for the purpose of reward learning. While the advantage is a valid optimal reward function, it is a heavily entangled reward, as it supervises each action based on the action of the optimal policy for the training MDP. Based on the analysis in the following Sec. 5, we cannot guarantee that this reward will be robust to changes in environment dynamics. In our experiments we demonstrate several cases where this reward simply encourages mimicking the expert policy π*, and fails to produce desirable behavior even when changes to the environment are made. We now discuss why IRL methods can fail to learn robust reward functions. First, we review the concept of reward shaping. BID15 describe a class of reward transformations that preserve the optimal policy. Their main theoretical result is that under the following reward transformation, DISPLAYFORM0 the optimal policy remains unchanged, for any function Φ: S → R. Moreover, without prior knowledge of the dynamics, this is the only class of reward transformations that exhibits policy invariance. Because IRL methods only infer rewards from demonstrations given by an optimal agent, they cannot in general disambiguate between reward functions within this class of transformations, unless the class of learnable reward functions is restricted. We argue that shaped reward functions may not be robust to changes in dynamics. We formalize this notion by studying policy invariance in two MDPs M, M' which share the same reward and differ only in the dynamics, denoted as T and T', respectively. Suppose an IRL algorithm recovers a shaped, policy-invariant reward r̂(s, a, s') under MDP M where Φ ≠ 0. Then, there exist MDP pairs M, M' where changing the transition model from T to T' breaks policy invariance on MDP M'. As a simple example, consider deterministic dynamics T(s, a) → s' and state-action rewards r̂(s, a) = r(s, a) + γΦ(T(s, a)) − Φ(s). It is easy to see that changing the dynamics T to T' such that T'(s, a) ≠ T(s, a) means that r̂(s, a) no longer lies in the equivalence class of Eqn. 3 for M'. First, let the notation Q*_{r,T}(s, a) denote the optimal Q-function with respect to a reward function r and dynamics T, and π*_{r,T}(a|s) denote the same for policies. We first define our notion of a "disentangled" reward. Definition 5.1 (Disentangled Rewards).
A reward function r'(s, a, s') is (perfectly) disentangled with respect to a ground-truth reward r(s, a, s') and a set of dynamics T such that under all dynamics T ∈ T, the optimal policy is the same: π*_{r',T}(a|s) = π*_{r,T}(a|s). We could also expand this definition to include a notion of suboptimality. However, we leave this direction to future work. Under maximum causal entropy RL, the following condition is equivalent to two optimal policies being equal, since Q-functions and policies are equivalent representations (up to arbitrary functions of state f(s)): DISPLAYFORM0 To remove unwanted reward shaping with arbitrary reward function classes, the learned reward function can only depend on the current state s. We require that the dynamics satisfy a decomposability condition, where functions over current states f(s) and next states g(s') can be isolated from their sum f(s) + g(s'). This can be satisfied, for example, by adding self-transitions at each state to an ergodic MDP, and it holds for the environments used in our experiments. The exact definition of the condition, as well as proofs of the following statements, are included in Appendix B. Theorem 5.1. Let r(s) be a ground-truth reward, and T be a dynamics model satisfying the decomposability condition. Suppose IRL recovers a state-only reward r'(s) such that it produces an optimal policy in T: DISPLAYFORM1 Then r'(s) is disentangled with respect to all dynamics. Theorem 5.2. Conversely, if a recovered reward r'(s, a, s') is disentangled with respect to the ground-truth reward and all dynamics, DISPLAYFORM2 then r' is only a function of state. In the traditional IRL setup, where we learn the reward in a single MDP, our analysis motivates learning reward functions that are solely functions of state. If the ground truth reward is also only a function of state, this allows us to recover the true reward up to a constant. In the method presented in Section 4, we cannot learn a state-only reward function, r_θ(s), meaning that we cannot guarantee that learned rewards will not be shaped. In order to decouple the reward function from the advantage, we propose to modify the discriminator of Sec. 4 with the form: DISPLAYFORM0 where f_{θ,φ} is restricted to a reward approximator g_θ and a shaping term h_φ as DISPLAYFORM1 The additional shaping term helps mitigate the effects of unwanted shaping on our reward approximator g_θ (and as we will show, in some cases it can account for all shaping effects). The entire training procedure is detailed in Algorithm 1 (for each iteration: collect trajectories τ_i = (s_0, a_0, ..., s_T, a_T) by executing π; train D_{θ,φ} via binary logistic regression to classify expert data τ^E_i from samples τ_i; update the reward r_{θ,φ}; update π with respect to r_{θ,φ} using any policy optimization method). Our algorithm resembles GAIL BID10 and GAN-GCL BID5, in that we alternate between training a discriminator to classify expert data from policy samples, and updating the policy to confuse the discriminator. The advantage of this approach is that we can now parametrize g_θ(s) as solely a function of the state, allowing us to extract rewards that are disentangled from the dynamics of the environment in which they were trained. In fact, under this restricted case, we can show the following under deterministic environments with a state-only ground truth reward (proof in Appendix C): DISPLAYFORM2 where r* is the true reward function. Since f* must recover the advantage, as shown in Sec. 4, h recovers the optimal value function V*, which serves as the reward shaping term. To be consistent with Sec. 4, an alternative way to interpret the form of Eqn.
4 is to view f θ,φ as the advantage under deterministic dynamics DISPLAYFORM3 In stochastic environments, we can instead view f (s, a, s) as a single-sample estimate of A * (s, a). In our experiments, we aim to answer two questions:1. Can AIRL learn disentangled rewards that are robust to changes in environment dynamics?2. Is AIRL efficient and scalable to high-dimensional continuous control tasks?To answer 1, we evaluate AIRL in transfer learning scenarios, where a reward is learned in a training environment, and optimized in a test environment with significantly different dynamics. We show that rewards learned with our algorithm under the constraint presented in Section 5 still produce optimal or near-optimal behavior, while naïve methods that do not consider reward shaping fail. We also show that in small MDPs, we can recover the exact ground truth reward function. To answer 2, we compare AIRL as an imitation learning algorithm against GAIL BID10 ) and the GAN-based GCL algorithm proposed by BID5, which we refer to as GAN-GCL, on standard benchmark tasks that do not evaluate transfer. Note that BID5 does not implement or evaluate GAN-GCL and, to our knowledge, we present the first empirical evaluation of this algorithm. We find that AIRL performs on par with GAIL in a traditional imitation learning setup while vastly outperforming it in transfer learning setups, and outperforms GAN-GCL in both settings. It is worth noting that, except for BID6, our method is the only IRL algorithm that we are aware of that scales to high dimensional tasks with unknown dynamics, and although GAIL BID10 resembles an IRL algorithm in structure, it does not recover disentangled reward functions, making it unable to re-optimize the learned reward under changes in the environment, as we illustrate below. For our continuous control tasks, we use trust region policy optimization BID20 as our policy optimization algorithm across all evaluated methods, and in the tabular MDP task, we use soft value iteration. We obtain expert demonstrations by training an expert policy on the ground truth reward, but hide the ground truth reward from the IRL algorithm. In this way, we simulate a scenario where we wish to use RL to solve a task but wish to refrain from manual reward engineering and instead seek to learn a reward function from demonstrations. Our code and additional supplementary material including videos will be available at https://sites.google.com/view/ adversarial-irl, and hyper-parameter and architecture choices are detailed in Appendix D. We first consider MaxEnt IRL in a toy task with randomly generated MDPs. The MDPs have 16 states, 4 actions, randomly drawn transition matrices, and a reward function that always gives a reward of 1.0 when taking an action from state 0. The initial state is always state 1.The optimal reward, learned reward with a state-only reward function, and learned reward using a state-action reward function are shown in FIG1. We subtract a constant offset from all reward functions so that they share the same mean for visualization -this does not influence the optimal policy. AIRL with a state-only reward function is able to recover the ground truth reward, but AIRL with a state-action reward instead recovers a shaped advantage function. 
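To make the restricted discriminator of Section 5 concrete, here is a minimal sketch (our own, not the authors' code) of one natural instantiation consistent with the potential-based shaping form of Eqn. 3: a state-only reward approximator g, a shaping potential h combined as g(s) + γ·h(s') − h(s), and the usual odds-ratio discriminator. The discount value and the linear g and h are illustrative stand-ins for small learned networks.

```python
import numpy as np

GAMMA = 0.99  # assumed discount factor for the sketch

def f_restricted(g, h, s, s_next):
    """Restricted potential f(s, a, s') = g(s) + gamma * h(s') - h(s):
    g approximates the (state-only) reward, h is the shaping term."""
    return g(s) + GAMMA * h(s_next) - h(s)

def discriminator(g, h, s, s_next, log_pi_a_given_s):
    """D(s, a, s') = exp{f} / (exp{f} + pi(a|s)), computed stably in log space."""
    f = f_restricted(g, h, s, s_next)
    return np.exp(f - np.logaddexp(f, log_pi_a_given_s))

# Toy usage with hypothetical linear g and h on a 2D state.
g = lambda s: float(np.dot([1.0, -0.5], s))
h = lambda s: float(np.dot([0.2, 0.3], s))
print(discriminator(g, h, np.array([1.0, 0.0]), np.array([0.5, 0.5]), log_pi_a_given_s=-1.2))
```

Because g depends only on the state, any shaping that the data demands is absorbed by h, which is the mechanism the following experiments rely on.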
We also show that in the transfer learning setup, under a new transition matrix T', the optimal policy under the state-only reward achieves optimal performance (it is identical to the ground truth reward), whereas the state-action reward only improves marginally over a uniform random policy. The learning curve for this experiment is shown in the accompanying figure (caption: Learning curve for the transfer learning experiment on tabular MDPs; value iteration steps are plotted on the x-axis, against returns for the policy on the y-axis). To evaluate whether our method can learn disentangled rewards in higher-dimensional environments, we perform transfer learning experiments on continuous control tasks. In each task, a reward is learned via IRL on the training environment, and the reward is used to reoptimize a new policy on a test environment. We train two IRL algorithms, AIRL and GAN-GCL, with state-only and state-action rewards. We also include results for directly transferring the policy learned with GAIL, and an oracle that involves optimizing the ground truth reward function with TRPO. Numerical results for these environment transfer experiments are given in TAB2. The first task involves a 2D point mass navigating to a goal position in a small maze when the position of the walls is changed between train and test time. At test time, the agent cannot simply mimic the actions learned during training, and instead must successfully infer that the goal in the maze is to reach the target. The task is shown in FIG3. Only AIRL trained with state-only rewards is able to consistently navigate to the goal when the maze is modified. Direct policy transfer and state-action IRL methods learn rewards which encourage the agent to take the same path taken in the training environment, which is blocked in the test environment. We plot the learned reward in FIG4. In our second task, we modify the agent itself. We train a quadrupedal "ant" agent to run forwards, and at test time we disable and shrink two of the front legs of the ant such that it must significantly change its gait. We find that AIRL is able to learn reward functions that encourage the ant to move forwards, acquiring a modified gait that involves orienting itself to face the forward direction and crawling with its two hind legs. Alternative methods, including transferring a policy learned by GAIL (which achieves near-optimal performance with the unmodified agent), fail to move forward at all. We show the qualitative difference in behavior in FIG5. We have demonstrated that AIRL can learn disentangled rewards that can accommodate significant domain shift even in high-dimensional environments where it is difficult to exactly extract the true reward. GAN-GCL can presumably learn disentangled rewards, but we find that the trajectory-centric formulation does not perform well even in learning rewards in the original task, let alone transferring to a new domain. GAIL learns successfully in the training domain, but does not acquire a representation that is suitable for transfer to test domains. (Figure captions: Illustration of the shifting maze task, where the agent (blue) must reach the goal (green); during training the agent must go around the wall on the left side, but during test time it must go around on the right. Bottom row: behavior acquired by optimizing a state-only reward learned with AIRL on the disabled ant environment; note that the ant must orient itself before crawling forward, which is a qualitatively different behavior from the optimal policy in the original environment, which runs sideways.)
Finally, we evaluate AIRL as an imitation learning algorithm against the GAN-GCL and the stateof-the-art GAIL on several benchmark tasks. Each algorithm is presented with 50 expert demonstrations, collected from a policy trained with TRPO on the ground truth reward function. For AIRL, we use an unrestricted state-action reward function as we are not concerned with reward transfer. Numerical are presented in TAB3.These experiments do not test transfer, and in a sense can be regarded as "testing on the training set," but they match the settings reported in prior work BID10.We find that the performance difference between AIRL and GAIL is negligible, even though AIRL is a true IRL algorithm that recovers reward functions, while GAIL does not. Both methods achieve close to the best possible on each task, and there is little room for improvement. This goes against the belief that IRL algorithms are indirect, and less efficient that direct imitation learning algorithms BID10. The GAN-GCL method is ineffective on all but the simplest Pendulum task when trained with the same number of samples as AIRL and GAIL. We find that a discriminator trained over trajectories easily overfits and provides poor learning signal for the policy. Our illustrate that AIRL achieves the same performance as GAIL on benchmark imitation tasks that do not require any generalization. On tasks that require transfer and generalization, illustrated in the previous section, AIRL outperforms GAIL by a wide margin, since our method is able to recover disentangled rewards that transfer effectively in the presence of domain shift. We presented AIRL, a practical and scalable IRL algorithm that can learn disentangled rewards and greatly outperforms both prior imitation learning and IRL algorithms. We show that rewards learned with AIRL transfer effectively under variation in the underlying domain, in contrast to unmodified IRL methods which tend to recover brittle rewards that do not generalize well and GAIL, which does not recover reward functions at all. In small MDPs where the optimal policy and reward are unambiguous, we also show that we can exactly recover the ground-truth rewards up to a constant. In this section, we show that the objective of AIRL matches that of solving the maximum causal entropy IRL problem. We use a similar method as BID5, which shows the justification of GAN-GCL for the trajectory-centric formulation. For simplicity we derive everything in the undiscounted case. A.1 SETUP As mentioned in Section 3, the goal of IRL can be seen as training a generative model over trajectories as: max st,at). We can compute the gradient with respect to θ as follows: DISPLAYFORM0 DISPLAYFORM1 Let p θ,t (s t, a t) = s t =t,a t =t p θ (τ) denote the state-action marginal at time t. Rewriting the above equation, we have: DISPLAYFORM2 As it is difficult to draw samples from p θ, we instead train a separate importance sampling distribution µ(τ). For the choice of this distribution, we follow BID5 and use a mixture policy µ(a|s) = 1 2 π(a|s) + 1 2p (a|s), wherep(a|s) is a rough density estimate trained on the demonstrations. This is justified as reducing the variance of the importance sampling estimate when the policy π(a|s) has poor coverage over the demonstrations in the early stages of training. Thus, our new gradient is: DISPLAYFORM3 We additionally wish to adapt the importance sampler π to reduce variance, by min- DISPLAYFORM4 The policy trajectory distribution factorizes as π(τ) = p(s 0)T −1 t=0 p(s t+1 |s t, a t)π(a t |s t). 
The dynamics and initial state terms inside π(τ) and p θ (τ) cancel, leaving the entropy-regularized policy objective: DISPLAYFORM5 In AIRL, we replace the cost learning objective with training a discriminator of the following form: DISPLAYFORM6 The objective of the discriminator is to minimize cross-entropy loss between expert demonstrations and generated samples: DISPLAYFORM7 We replace the policy optimization objective with the following reward: DISPLAYFORM8 A.2 DISCRIMINATOR OBJECTIVE First, we show that training the gradient of the discriminator objective is the same as Eqn. 5. We write the negative loss to turn the minimization problem into maximization, and use µ to denote a mixture between the dataset and policy samples. DISPLAYFORM9 Taking the derivative w.r.t. θ, DISPLAYFORM10 Multiplying the top and bottom of the fraction in the second expectation by the state marginal π(s t) = a π t (s t, a t), and grouping terms we get: DISPLAYFORM11 Where we have writtenp θ,t (s t, a t) = exp{f θ (s t, a t)}π t (s t), andμ to denote a mixture between p θ (s, a) and policy samples. This expression matches Eqn. 5, with f θ (s, a) serving as the reward function, when π maximizes the policy objective so thatp θ (s, a) = p θ (s, a). Next, we show that the policy objective matches that of the sampler of Eqn. 6. The objective of the policy is to maximize with respect to the rewardr t (s, a). First, note that: DISPLAYFORM0 Thus, whenr(s, a) is summed over entire trajectories, we obtain the entropy-regularized policy objective DISPLAYFORM1 Where f θ serves as the reward function. DISPLAYFORM2 The global minimum of the discriminator objective is achieved when π = π E, where π denotes the learned policy (the "generator" of a GAN) and π E denotes the policy under which demonstrations were collected BID8. At this point, the output of the discriminator is 1 2 for all values of s, a, meaning we have exp{f θ (s, a)} = π E (a|s), or f * (s, a) = log π E (a|s) = A * (s, a). In this section we include proofs for Theorems 5.1 and 5.2, and the condition on the dynamics necessary for them to hold. Definition B.1 (Decomposability Condition). Two states s 1, s 2 are defined as "1-step linked" under a dynamics or transition distribution T (s |a, s) if there exists a state s that can reach s 1 and s 2 with positive probability in one time step. Also, we define that this relationship can transfer through transitivity: if s 1 and s 2 are linked, and s 2 and s 3 are linked, then we also consider s 1 and s 3 to be linked. A transition distribution T satisfies the decomposability condition if all states in the MDP are linked with all other states. The key reason for needing this condition is that it allows us to decompose the functions state dependent f (s) and next state dependent g(s) from their sum f (s) + g(s), as stated below: Lemma B.1. Suppose the dynamics for an MDP satisfy the decomposability condition. Then, for functions a(s), b(s), c(s), d(s), if for all s, s: DISPLAYFORM0 Then for for all s, a(s) = c(s) + const DISPLAYFORM1 Proof. Rearranging, we have: DISPLAYFORM2 for some function only dependent on s. In order for this to be representable, the term b(s) − d(s) must be equal for all successor states s from s. Under the decomposability condition, all successor states must therefore be equal in this manner through transitivity, meaning we have b(s) − d(s) must be constant with respect to s. Therefore, a(s) = c(s) + const. 
We can then substitute this expression back in to the original equation to derive b(s) = d(s) + const. We consider the case when the ground truth reward is state-only. We now show that if the learned reward is also state-only, then we guarantee learning disentangled rewards, and vice-versa (sufficiency and necessity). Theorem 5.1. Let r(s) be a ground-truth reward, and T be a dynamics model satisfying the decomposability condition. Suppose IRL recovers a state-only reward r (s) such that it produces an optimal policy in T: Q * r,T (s, a) = Q * r,T (s, a) − f (s) Then, r (s) is disentangled with respect to all dynamics. Proof. We show that r (s) must equal the ground-truth reward up to constants (modifying rewards by constants does not change the optimal policy). Let r (s) = r(s) + φ(s) for some arbitrary function of state φ(s). We have: Proof. We show the converse, namely that if r (s, a, s) can depend on a or s, then there exists a dynamics model T such that the optimal policy is changed, i.e. Q * r,T (s, a) = Q * r,T (s, a) + f (s) ∀s, a. Consider the following 3-state MDP with deterministic dynamics and starting state S: We denote the action with a small letter, i.e. taking the action a from S brings the agent to state A, receiving a reward of 0. For simplicity, assume the discount factor γ = 1. The optimal policy here takes the a action, returns to s, and repeat for infinite positive reward. An action-dependent reward which induces the same optimal policy would be to move the reward from the action returning to s to the action going to a or s: Optimizing r on this new MDP in a different policy than optimizing r, as the agent visits B, ing in infinite negative reward. In this section, we prove that AIRL can recover the ground truth reward up to constants if the ground truth is only a function of state r(s). For simplicity, we consider deterministic environments, so that s is uniquely defined by s, a, and we restrict AIRL's reward estimator g to only be a function of state.
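The decomposability condition of Definition B.1 is straightforward to check for a tabular MDP: link every pair of states reachable from a common predecessor in one step, take the transitive closure, and verify that a single linked class remains. The sketch below is our own illustrative check (with a random toy transition model), not code from the paper.

```python
import numpy as np

def satisfies_decomposability(T, eps=0.0):
    """Check Definition B.1 for a tabular MDP.

    T: array of shape [S, A, S'] with T[s, a, s2] = P(s2 | s, a).
    Two states are 1-step linked if some state reaches both in one step with
    positive probability; the condition holds if the transitive closure of this
    relation links every pair of states.
    """
    n_states = T.shape[0]
    parent = list(range(n_states))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for s in range(n_states):
        successors = np.where(T[s].sum(axis=0) > eps)[0]  # reachable from s by any action
        for s2 in successors[1:]:
            union(successors[0], s2)

    return len({find(s) for s in range(n_states)}) == 1

# Toy usage: random ergodic dynamics over 4 states and 2 actions.
rng = np.random.default_rng(0)
T = rng.dirichlet(np.ones(4), size=(4, 2))
print(satisfies_decomposability(T))
```

Adding self-transitions, as suggested in Section 5, makes every state a common predecessor of itself and its successors, which is why it enforces the condition.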
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rkHywl-A-
We propose an adversarial inverse reinforcement learning algorithm capable of learning reward functions which can transfer to new, unseen environments.
We consider two questions at the heart of machine learning: how can we predict if a minimum will generalize to the test set, and why does stochastic gradient descent find minima that generalize well? Our work responds to \citet{zhang2016understanding}, who showed deep neural networks can easily memorize randomly labeled training data, despite generalizing well on real labels of the same inputs. We show that the same phenomenon occurs in small linear models. These observations are explained by the Bayesian evidence, which penalizes sharp minima but is invariant to model parameterization. We also demonstrate that, when one holds the learning rate fixed, there is an optimum batch size which maximizes the test set accuracy. We propose that the noise introduced by small mini-batches drives the parameters towards minima whose evidence is large. Interpreting stochastic gradient descent as a stochastic differential equation, we identify the ``noise scale" $g = \epsilon (\frac{N}{B} - 1) \approx \epsilon N/B$, where $\epsilon$ is the learning rate, $N$ the training set size and $B$ the batch size. Consequently the optimum batch size is proportional to both the learning rate and the size of the training set, $B_{opt} \propto \epsilon N$. We verify these predictions empirically. This paper shows Bayesian principles can explain many recent observations in the deep learning literature, while also discovering practical new insights. BID27 trained deep convolutional networks on ImageNet and CIFAR10, achieving excellent accuracy on both training and test sets. They then took the same input images, but randomized the labels, and found that while their networks were now unable to generalize to the test set, they still memorized the training labels. They claimed these results contradict learning theory, although this claim is disputed BID18 BID7. Nonetheless, their results beg the question; if our models can assign arbitrary labels to the training set, why do they work so well in practice? Meanwhile BID19 observed that if we hold the learning rate fixed and increase the batch size, the test accuracy usually falls. This striking result shows that improving our estimate of the full-batch gradient can harm performance. BID11 observed a linear scaling rule between batch size and learning rate in a deep ResNet, while BID15 proposed a square root rule on theoretical grounds. Many authors have suggested "broad minima" whose curvature is small may generalize better than "sharp minima" whose curvature is large BID4 BID14. Indeed, BID7 argued the results of BID27 can be understood using "nonvacuous" PAC-Bayes generalization bounds which penalize sharp minima, while BID19 showed stochastic gradient descent (SGD) finds wider minima as the batch size is reduced. However, BID6 challenged this interpretation by arguing that the curvature of a minimum can be arbitrarily increased by changing the model parameterization. In this work we show:• The results of BID27 are not unique to deep learning; we observe the same phenomenon in a small "over-parameterized" linear model. We demonstrate that this phenomenon is straightforwardly understood by evaluating the Bayesian evidence in favor of each model, which penalizes sharp minima but is invariant to the model parameterization.• SGD integrates a stochastic differential equation whose "noise scale" g ≈ εN/B, where ε is the learning rate, N the training set size and B the batch size. Noise drives SGD away from sharp minima, and therefore there is an optimal batch size which maximizes the test set accuracy.
This optimal batch size is proportional to the learning rate and training set size 1.We describe Bayesian model comparison in section 2. In section 3 we replicate the observations of BID27 in a linear model, and show they are explained by the Bayesian evidence. In section 4 we show there is an optimum batch size which maximizes the test set accuracy, and in section 5 we derive scaling rules between the optimum batch size, learning rate, training set size and momentum coefficient. Throughout this work, "generalization gap" refers to the gap in test accuracy between small and large batch SGD training, not the gap in accuracy between training and test sets. Bayesian model comparison was first applied to neural networks in BID22. We provide a brief tutorial here, since the theory is central to the remainder of the paper. For simplicity we first consider a classification model M with a single parameter ω, training inputs x and training labels y. We can infer a posterior probability distribution over the parameter by applying Bayes theorem, DISPLAYFORM0 The likelihood, P (y|ω, DISPLAYFORM1, where H(ω; M) = − i ln (P (y i |ω, x i ; M)) denotes the cross-entropy of unique categorical labels. We typically use a Gaussian prior, P (ω; M) = λ/2πe−λω 2 /2, and therefore the posterior probability density of the parameter given the training data, P (ω|y, x; M) ∝ λ/2πe −C(ω;M), where C(ω; M) = H(ω; M) + λω 2 /2 denotes the L2 regularized cross entropy, or "cost function", and λ is the regularization coefficient. The value ω 0 which minimizes the cost function lies at the maximum of this posterior. To predict an unknown label y t of a new input x t, we should compute the integral, P (y t |x t, y, x; M) = dω P (y t |ω, x t ; M)P (ω|y, x; M) DISPLAYFORM2 However these integrals are dominated by the region near ω 0, and since P (y t |ω, x t ; M) is smooth we usually approximate P (y t |x t, x, y; M) ≈ P (y t |ω 0, x t ; M). Having minimized C(ω; M) to find ω 0, we now wish to compare two different models and select the best one. The probability ratio, DISPLAYFORM3 The second factor on the right is the prior ratio, which describes which model is most plausible. To avoid unnecessary subjectivity, we usually set this to 1. Meanwhile the first factor on the right is the evidence ratio, which controls how much the training data changes our prior beliefs. BID10 showed that maximizing the evidence (or "marginal likelihood") minimizes a PAC-Bayes generalization bound. To compute it, we evaluate the normalizing constant of equation 1, DISPLAYFORM4 Notice that the evidence is computed by integrating out the parameters; and consequently it is invariant to the model parameterization. Since this integral is dominated by the region near the minimum ω 0, we can estimate the evidence by Taylor expanding DISPLAYFORM5 Within this "Laplace" approximation, the evidence is controlled by the value of the cost function at the minimum, and by the logarithm of the ratio of the curvature about this minimum compared to the regularization constant. Thus far we have considered models of a single parameter; in realistic models with many parameters DISPLAYFORM6 ω0, where |∇∇C(ω)| ω0 is the determinant of the Hessian, and p denotes the number of model parameters BID17. 
The determinant of the Hessian is simply the product of its eigenvalues, (p i=1 λ i), and thus, DISPLAYFORM7 The contribution (λ DISPLAYFORM8 ω0) is often called the "Occam factor", because it enforces Occam's razor; when two models describe the data equally well, the simpler model is usually better BID12. Minima with low curvature are simple, because the parameters do not have to be finetuned to fit the data. Intuitively, the Occam factor describes the fraction of the prior parameter space consistent with the data. Since this fraction is always less than one, we propose to approximate equation 9 away from local minima by only performing the summation over eigenvalues λ i ≥ λ. The evidence can be reframed in the language of information theory, whereby Occam's factor penalizes the amount of information the model must learn about the parameters to accurately model the training data BID13 BID0 BID24.In this work, we will compare the evidence against a null model which assumes the labels are entirely random, assigning equal probability to each class. This unusual model has no parameters, and so the evidence is controlled by the likelihood alone, P (y|x; N U LL) = (1/n) N = e −N ln (n), where n denotes the number of model classes and N the number of training labels. Thus the evidence ratio, DISPLAYFORM9 Where DISPLAYFORM10 is the log evidence ratio in favor of the null model. Clearly, we should only assign any confidence to the predictions of our model if E(ω 0) < 0.The evidence supports the intuition that broad minima generalize better than sharp minima, but unlike the curvature it does not depend on the model parameterization. BID6 showed one can increase the Hessian eigenvalues by rescaling the parameters, but they must simultaneously rescale the regularization coefficients, otherwise the model changes. Since Occam's factor arises from the log ratio, ln (λ i /λ), these two effects cancel out 2. It is difficult to evaluate the evidence for deep networks, as we cannot compute the Hessian of millions of parameters. Additionally, neural networks exhibit many equivalent minima, since we can permute the hidden units without changing the model. To compute the evidence we must carefully account for this "degeneracy". We argue these issues are not a major limitation, since the intuition we build studying the evidence in simple cases will be sufficient to explain the of both BID27 and BID19. BID27 showed that deep neural networks generalize well on training inputs with informative labels, and yet the same model can drastically overfit on the same input images when the labels are randomized; perfectly memorizing the training set. To demonstrate that these observations are not unique to deep networks, let's consider a far simpler model; logistic regression. We form a small balanced training set comprising 800 images from MNIST, of which half have true label "0" and half true label "1". Our test set is also balanced, comprising 5000 MNIST images of zeros and 5000 MNIST images of ones. There are two tasks. In the first task, the labels of both the training and test sets are randomized. In the second task, the labels are informative, matching the true MNIST labels. Since the images contain 784 pixels, our model has just 784 weights and 1 bias. We show the accuracy of the model predictions on both the training and test sets in figure 1. When trained on the informative labels, the model generalizes well to the test set, so long as it is weakly regularized. 
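As a concrete illustration of the evidence comparison against the null model described above, the following is a minimal sketch (our own) of the Laplace estimate of the log evidence ratio; the Hessian eigenvalues and cost value in the usage line are made-up numbers for an 800-example binary task, not results from the paper.

```python
import numpy as np

def log_evidence_ratio(C_min, hessian_eigs, lam, n_train, n_classes):
    """Laplace estimate of the log evidence ratio E(w0) in favor of the null model:

        E(w0) = C(w0) + 0.5 * sum_i ln(lambda_i / lambda) - N * ln(n),

    where C(w0) is the regularized cross-entropy at the minimum, lambda_i are the
    Hessian eigenvalues of C at w0, lambda the L2 regularization coefficient,
    N the number of training labels and n the number of classes. The model is
    expected to generalize only if E(w0) < 0."""
    eigs = np.asarray(hessian_eigs)
    eigs = eigs[eigs >= lam]  # keep only eigenvalues above the prior scale, as proposed above
    occam = 0.5 * np.sum(np.log(eigs / lam))
    return C_min + occam - n_train * np.log(n_classes)

# Illustrative usage (hypothetical values): 800 binary labels, 785 parameters.
print(log_evidence_ratio(C_min=35.0, hessian_eigs=np.linspace(0.1, 50.0, 785),
                         lam=1.0, n_train=800, n_classes=2))
```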
However the model also perfectly memorizes the random labels, replicating the observations of BID27 is observed as the regularization coefficient increases. For completeness, we also evaluate the mean margin between training examples and the decision boundary. For both random and informative labels, the margin drops significantly as we reduce the regularization coefficient. When weakly regularized, the mean margin is roughly 50% larger for informative labels than for random labels. Now consider figure 2, where we plot the mean cross-entropy of the model predictions, evaluated on both training and test sets, as well as the Bayesian log evidence ratio defined in the previous section. Looking first at the random label experiment in figure 2a, while the cross-entropy on the training set vanishes when the model is weakly regularized, the cross-entropy on the test set explodes. Not only does the model make random predictions, but it is extremely confident in those predictions. As the regularization coefficient is increased the test set cross-entropy falls, settling at ln 2, the crossentropy of assigning equal probability to both classes. Now consider the Bayesian evidence, which we evaluate on the training set. The log evidence ratio is large and positive when the model is weakly regularized, indicating that the model is exponentially less plausible than assigning equal probabilities to each class. As the regularization parameter is increased, the log evidence ratio falls, but it is always positive, indicating that the model can never be expected to generalize well. Now consider figure 2b (informative labels). Once again, the training cross-entropy falls to zero when the model is weakly regularized, while the test cross-entropy is high. Even though the model makes accurate predictions, those predictions are overconfident. As the regularization coefficient increases, the test cross-entropy falls below ln 2, indicating that the model is successfully generalizing to the test set. Now consider the Bayesian evidence. The log evidence ratio is large and positive when the model is weakly regularized, but as the regularization coefficient increases, the DISPLAYFORM0 Figure 2: The cross-entropy and log evidence ratio, evaluated on random (a) or informative (b) labels. The evidence, evaluated on the training set, is strongly correlated with the test cross-entropy.log evidence ratio drops below zero, indicating that the model is exponentially more plausible than assigning equal probabilities to each class. As we further increase the regularization, the log evidence ratio rises to zero while the test cross-entropy rises to ln 2. Test cross-entropy and Bayesian evidence are strongly correlated, with minima at the same regularization strength. Bayesian model comparison has explained our in a logistic regression. Meanwhile, BID20 showed the largest Hessian eigenvalue also increased when training on random labels in deep networks, implying the evidence is falling. We conclude that Bayesian model comparison is quantitatively consistent with the of BID27 in linear models where we can compute the evidence, and qualitatively consistent with their in deep networks where we cannot. BID7 recently demonstrated the of BID27 can also be understood by minimising a PAC-Bayes generalization bound which penalizes sharp minima. We showed above that generalization is strongly correlated with the Bayesian evidence, a weighted combination of the depth of a minimum (the cost function) and its breadth (the Occam factor). 
Consequently Bayesians often add isotropic Gaussian noise to the gradient BID26.In appendix A, we show this drives the parameters towards broad minima whose evidence is large. The noise introduced by small batch training is not isotropic, and its covariance matrix is a function of the parameter values, but empirically Keskar et al. FORMULA0 found it has similar effects, driving the SGD away from sharp minima. This paper therefore proposes Bayesian principles also account for the "generalization gap", whereby the test set accuracy often falls as the SGD batch size is increased (holding all other hyper-parameters constant). Since the gradient drives the SGD towards deep minima, while noise drives the SGD towards broad minima, we expect the test set performance to show a peak at an optimal batch size, which balances these competing contributions to the evidence. We were unable to observe a generalization gap in linear models (since linear models are convex there are no sharp minima to avoid). Instead we consider a shallow neural network with 800 hidden units and RELU hidden activations, trained on MNIST without regularization. We use SGD with a momentum parameter of 0.9. Unless otherwise stated, we use a constant learning rate of 1.0 which does not depend on the batch size or decay during training. Furthermore, we train on just 1000 images, selected at random from the MNIST training set. This enables us to compare small batch to full batch training. We emphasize that we are not trying to achieve optimal performance, but to study a simple model which shows a generalization gap between small and large batch training. In figure 3, we exhibit the evolution of the test accuracy and test cross-entropy during training. Our small batches are composed of 30 images, randomly sampled from the training set. Looking first at figure 3a, small batch training takes longer to converge, but after a thousand gradient updates a clear generalization gap in model accuracy emerges between small and large training batches. Now consider figure 3b. While the test cross-entropy for small batch training is lower at the end Figure 5: a) The test set accuracy as a function of batch size, for a range of learning rates. The performance peak shifts to the right as we increase, but the overall performance falls once 3. b) The best observed batch size is proportional to the learning rate across two orders of magnitude. of training; the cross-entropy of both small and large training batches is increasing, indicative of over-fitting. Both models exhibit a minimum test cross-entropy, although after different numbers of gradient updates. Intriguingly, we show in appendix B that the generalization gap between small and large batch training shrinks significantly when we introduce L2 regularization. From now on we focus on the test set accuracy (since this converges as the number of gradient updates increases). In figure 4a, we exhibit training curves for a range of batch sizes between 1 and 1000. We find that the model cannot train when the batch size B 10. In figure 4b we plot the mean test set accuracy after 10000 training steps. A clear peak emerges, indicating that there is indeed an optimum batch size which maximizes the test accuracy, consistent with Bayesian intuition. The of BID19 focused on the decay in test accuracy above this optimum batch size. We showed above that the test accuracy peaks at an optimal batch size, if one holds the other SGD hyper-parameters constant. 
We argued that this peak arises from the tradeoff between depth and breadth in the Bayesian evidence. However it is not the batch size itself which controls this tradeoff, but the underlying scale of random fluctuations in the SGD dynamics. We now identify this SGD "noise scale", and use it to derive three scaling rules which predict how the optimal batch size depends on the learning rate, training set size and momentum coefficient. A gradient update, DISPLAYFORM0 (a) (b) Figure 6: a) The test accuracy as a function of batch size, for a range of training set sizes. To reduce noise, we average each curve over five experiments. The performance peak shift to the right as we increase the size of the training set. Unsurprisingly, the overall model performance also improves.b) The best observed batch size is proportional to the size of the training set once N 20000.where is the learning rate, N the training set size, DISPLAYFORM1 dCi dω the true gradient, and DISPLAYFORM2 dCi dω the estimated gradient evaluated on a mini-batch. The expected gradient of a single example, DISPLAYFORM3 is a matrix describing the gradient covariances, which are a function of the current parameter values. We adopt the central limit theorem and model the gradient error α = (dĈ dω − dC dω) with Gaussian random noise (We discuss this approximation briefly in appendix C). It is easy to show that α = 0, while α 2 = N (DISPLAYFORM4 To continue, we interpret equation 11 as the discrete update of a stochastic differential equation BID21 BID9, DISPLAYFORM5 Where t is a continuous variable, η(t) represents noise, η(t) = 0 and η(t)η(t) = gF (ω)δ(t−t). The constant g controls the scale of random fluctuations in the dynamics. To relate this differential equation to the SGD, we compute a gradient update ∆ω = DISPLAYFORM6 Finally, to measure g, we equate the variance in this gradient update to the variance in equation 11, DISPLAYFORM7 Rearranging, the SGD noise scale g = (DISPLAYFORM8 The noise scale falls when the batch size increases, consistent with our earlier observation of an optimal batch size B opt while holding the other hyper-parameters fixed. Notice that one would equivalently observe an optimal learning rate if one held the batch size constant. A similar analysis of the SGD was recently performed by BID23, although their treatment only holds near local minima where the covariances F (ω) are stationary. Our analysis holds throughout training, which is necessary since BID19 found that the beneficial influence of noise was most pronounced at the start of training. When we vary the learning rate or the training set size, we should keep the noise scale fixed, which implies that B opt ∝ N. In figure 5a, we plot the test accuracy as a function of batch size after (10000/) training steps, for a range of learning rates. Exactly as predicted, the peak moves to the right as increases. Additionally, the peak test accuracy achieved at a given learning rate does not begin to fall until ∼ 3, indicating that there is no significant discretization error in integrating the stochastic differential equation below this point. Above this point, the discretization error begins to dominate and the peak test accuracy falls rapidly. In figure 5b, we plot the best observed batch size as a function of learning rate, observing a clear linear trend, B opt ∝. The error bars indicate the distance from the best observed batch size to the next batch size sampled in our experiments. 
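The noise scale and the resulting linear scaling rule are simple to compute; the sketch below (our own, with illustrative numbers) shows how holding g fixed ties the optimal batch size to the learning rate and training set size.

```python
def sgd_noise_scale(lr, train_size, batch_size):
    """Scale of random fluctuations in the SGD dynamics: g = eps * (N/B - 1) ~= eps*N/B."""
    return lr * (train_size / batch_size - 1.0)

def optimal_batch_size(b_ref, lr_ref, n_ref, lr_new, n_new):
    """Holding the noise scale g fixed implies B_opt is proportional to eps * N."""
    return b_ref * (lr_new / lr_ref) * (n_new / n_ref)

# Example: if B_opt = 30 at eps = 1.0 with N = 1000 training images, the linear
# scaling rule predicts B_opt ~ 90 at eps = 3.0 (or, equivalently, at N = 3000).
print(sgd_noise_scale(1.0, 1000, 30), optimal_batch_size(30, 1.0, 1000, 3.0, 1000))
```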
This scaling rule allows us to increase the learning rate with no loss in test accuracy and no increase in computational cost, simply by simultaneously increasing the batch size. We can then exploit increased parallelism across multiple GPUs, reducing model training times BID11. A similar scaling rule was independently proposed by and BID3, although neither work identifies the existence of an optimal noise scale. A number of authors have proposed adjusting the batch size adaptively during training BID8 BID2 BID5, while BID1 proposed linearly coupling the learning rate and batch size within this framework. In Smith et al. FORMULA0, we show empirically that decaying the learning rate during training and increasing the batch size during training are equivalent. In figure 6a we exhibit the test set accuracy as a function of batch size, for a range of training set sizes after 10000 steps (= 1 everywhere). Once again, the peak shifts right as the training set size rises, although the generalization gap becomes less pronounced as the training set size increases. In figure 6b, we plot the best observed batch size as a function of training set size; observing another linear trend, B opt ∝ N. This scaling rule could be applied to production models, progressively growing the batch size as new training data is collected. We expect production datasets to grow considerably over time, and consequently large batch training is likely to become increasingly common. Finally, in appendix D we extend our analysis to SGD with momentum, identifying the noise scale, g ≈ N B(1−m), where m denotes the momentum coefficient. Notice that this reduces to the noise scale of conventional SGD as m → 0. When m > 0, we obtain an additional scaling rule B opt ∝ 1/(1 − m). This scaling rule predicts that the optimal batch size will increase when the momentum coefficient is increased. In FIG2 we plot the test set performance as a function of batch size after 10000 gradient updates (= 1 everywhere), for a range of momentum coefficients. In FIG2, we plot the best observed batch size as a function of the momentum coefficient, and fit our to the scaling rule above; obtaining remarkably good agreement. We propose a simple heuristic for tuning the batch size, learning rate and momentum coefficient in appendix E. Just like deep neural networks, linear models which generalize well on informative labels can memorize random labels of the same inputs. These observations are explained by the Bayesian evidence, which is composed of the cost function and an "Occam factor". The Occam factor penalizes sharp minima but it is invariant to changes in model parameterization. Mini-batch noise drives SGD away from sharp minima, and therefore there is an optimum batch size which maximizes the test accuracy. Interpreting SGD as the discretization of a stochastic differential equation, we predict this optimum batch size should scale linearly with both the learning rate and the training set size, B opt ∝ N. We derive an additional scaling rule, B opt ∝ 1/(1 − m), between the optimal batch size and the momentum coefficient. We verify these scaling rules empirically and discuss their implications. Instead of minimizing the cost function, Bayesian usually prefer to sample parameter values from the posterior , DISPLAYFORM0 where C(ω; M) is the regularized summed cost function, as shown in section 2 of the main text. 
It is well known that one can sample this posterior by simulating the overdamped Langevin equation BID9, which is described by the stochastic differential equation, DISPLAYFORM1 where t is a continuous variable, and η(t) describes Gaussian noise with mean η(t) = 0 and variance η(t)η(t) = 2T Iδ(t−t). The matrix I denotes the identity, while T is the "temperature". Notice this Langevin equation is extremely similar to the stochastic differential equation of SGD, discussed in section 5 of the main text. Indeed, if the gradient covariances F (ω) were stationary and proportional to the identity, then the SGD would integrate an overdamped Langevin equation with temperature proportional to the SGD noise scale g. As t → ∞, the probability of sampling any particular parameter vector ω from the Langevin equation, P (ω, t → ∞) ∝ e −C/T.We obtain posterior samples if T = 1. In order to draw posterior samples in practice, we repeatedly integrate the Langevin equation (at temperature T = 1), over a finite step t → t + /N, DISPLAYFORM2 where α denotes a Gaussian random variable with mean α = 0 and variance α 2 = 2 I/N, which introduces isotropic noise to the gradient update as described in section 4 of the main text. Note that, since C(ω; M) denotes the summed cost function, we chose to scale our step size by the training set size N. This also matches our treatment of SGD in section 5 of the main text. The larger the step size, the greater the discretization error, but if is sufficiently small and we iterate equation 17 sufficiently many times, we will obtain valid samples from the posterior. Since the probability of sampling any given parameter vector ω is proportional to the posterior, the probability of sampling a parameter vector belonging to any given local minimum is proportional to the integral of the posterior over the bowl of attraction D which surrounds that minimum. DISPLAYFORM3 Meanwhile we showed in section 2 of the main text that the evidence in favor of a model is proportional to the integral of the posterior over all parameter space. DISPLAYFORM4 As we discussed, this evidence is dominated by the contributions to the integral near local minima. In a convex model, there is only one such minimum; which allows us to accurately estimate the model evidence. Meanwhile, in non-convex models, there are many such minima, and so we can instead define the evidence as a sum over local evidences in favor of each minimum, DISPLAYFORM5 where we define the evidence in favor of a minimum as the integral over the local bowl of attraction, DISPLAYFORM6 Since the combined bowls of attraction of all the minima perfectly tile the entire parameter space, equations 19 and 20 are equivalent. Meanwhile, equating equations 18 and 21 we find that, when one performs Bayesian posterior sampling, the probability of sampling the parameter vector from a local minimum is proportional to the evidence in favor of that minimum. This demonstrates that bayesian posterior samples are biased in favor of local minima whose evidence is large, which explains why a single posterior sample ω p often achieves lower test error than the cost function minimum ω 0. In the experiments of section 4 of the main text, the L2 regularization coefficient λ = 0. In figure 8, we plot the evolution of the training curves when λ = 0.1, for both small batch and full batch training. Excluding the regularization parameter, these experiments are identical to figure 3. 
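Returning to the discretized posterior-sampling update of Eqn. 17 above, here is a minimal sketch (our own illustration) of one Langevin step at temperature T = 1, with a toy quadratic cost standing in for the regularized cross-entropy.

```python
import numpy as np

def langevin_step(w, grad_C, lr, train_size, rng):
    """One discretized step of the overdamped Langevin equation at T = 1 (Eqn. 17):
    w <- w - (eps/N) * dC/dw + alpha, with alpha ~ N(0, (2*eps/N) * I).
    grad_C is the gradient of the *summed*, regularized cost function C(w)."""
    step = lr / train_size
    noise = rng.normal(scale=np.sqrt(2.0 * step), size=w.shape)
    return w - step * grad_C(w) + noise

# Toy usage: sample from the posterior of a quadratic cost C(w) = 0.5 * N * ||w||^2,
# whose posterior variance per dimension is 1/N.
rng = np.random.default_rng(0)
N = 1000
grad_C = lambda w: N * w
w = np.zeros(2)
for _ in range(5000):
    w = langevin_step(w, grad_C, lr=0.1, train_size=N, rng=rng)
print(w)
```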
To our surprise, regularized full batch training took longer to converge than small batch training. In another surprise, regularization significantly reduced the size of the generalization gap. While large batch regularized training achieves slightly lower test set accuracy than unregularized small batch training, it also achieves lower test cross-entropy. The test cross-entropy of our regularized models does not degrade after many gradient updates, removing the need for early stopping. In section 5 of the main text, we approximated the difference between the full batch gradient and the mini-batch gradient estimate, α = (DISPLAYFORM0, by a Gaussian random variable. This enabled us to derive the scaling rules, which we verified empirically. We motivated this assumption by reference to the central limit theorem, which states that the gradient error will tend towards Gaussian noise as {N → ∞, B → ∞, B N}, so long as the distribution of gradients over individual training examples does not have heavy tails. In practice neither N nor B is infinite, and the gradient distribution may be heavy tailed, especially when gradients are sparse. Nonetheless the central limit theorem tends to be surprisingly robust in practice, and is consequently widely used. It is beyond the scope of this work to perform a thorough study of the gradient noise distribution in deep networks. However as a brief proof of principle, we present the distribution of the gradient immediately after random initialization in figure 9, for the shallow neural network discussed in sections 4 and 5 of the main text. In figure 9a, we present the distribution over the individual training examples, of the gradient of a single matrix element in the softmax output layer, chosen randomly. The distribution is double peaked and clearly not Gaussian. However in FIG2, we plot the distribution of the gradient of the same matrix element, when averaged over randomly sampled mini-batches of 30 images (without replacement). A single peak emerges, and while the distribution is still slightly skewed, it is clearly already approaching the Gaussian limit. We conclude that the Gaussian approximation is likely to be reasonable for commonly used mini-batch sizes. Momentum simulates a generalized Langevin equation (with structured fluctuations), DISPLAYFORM0 λ is the "damping coefficient" and η(t) describes Gaussian noise, whose statistics η(t) = 0 and η(t)η(t) = gλF (w)δ(t − t). As before, the coefficient g describes the scale of random fluctuations in the dynamics, and F (ω) describes the gradient covariances between parameters. We include a factor of λ in the noise variance to satisfy the fluctuation-dissipation theorem, which states that we can vary the damping coefficient without changing the probability of sampling any particular configuration of parameters in the limit t → ∞, if we proportionally increase the noise variance. To relate this Langevin equation to the usual momentum equations, we first re-express it as two coupled first order differential equations, dp dt = −λp − dC dω + η(t), DISPLAYFORM1 Integrating over a single step ∆t/N, DISPLAYFORM2 As observed in the main text, if we wish to keep the scale of random fluctuations constant, then we should scale the batch size B ∝ N. We also predict an additional scaling relation between the batch size and the momentum parameter, B ∝ 1/(1 − m). Note that one can also interpret ef f = /(1 − m) as the "effective learning rate". 
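The momentum correction can be folded into the same bookkeeping; the short sketch below (our own) uses the effective learning rate ε_eff = ε/(1 − m) to show that raising the momentum coefficient while keeping the noise scale fixed implies a larger optimal batch size, B_opt ∝ 1/(1 − m).

```python
def noise_scale_with_momentum(lr, train_size, batch_size, m):
    """Noise scale for SGD with momentum, g ~= eps * N / (B * (1 - m)),
    i.e. the plain SGD formula with the effective learning rate eps/(1 - m)."""
    eps_eff = lr / (1.0 - m)
    return eps_eff * train_size / batch_size

# Raising m from 0.9 to 0.95 halves (1 - m), so doubling the batch size keeps g fixed.
print(noise_scale_with_momentum(1.0, 1000, 30, 0.9))
print(noise_scale_with_momentum(1.0, 1000, 60, 0.95))
```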
Here we propose a simple heuristic for tuning the batch size, learning rate and momentum parameter, in order to maximize both the test accuracy and the batch size (enabling parallel training across many machines); a minimal code sketch of the schedule follows below. Note that this is only worthwhile if one expects to retrain a model many times. 1. Set the learning rate to 0.1 and the momentum coefficient to 0.9. Run experiments at a range of batch sizes on a logarithmic scale, and identify the optimal batch size which maximizes the validation set accuracy. If training is not stable, reduce the learning rate, and repeat. 2. Repeatedly increase the batch size by a factor of 3, while scaling the learning rate ε ∝ B, until the validation set accuracy starts to fall. Then repeatedly increase the batch size by a factor of 3, while scaling the momentum coefficient (1 − m) ∝ 1/B, until either the validation set accuracy falls or the batch size reaches the limits of your hardware. 3. Having identified the final learning rate and momentum parameter, retune the batch size on a linear scale in the local neighborhood of the current batch size. We believe that this simple procedure will increase the test accuracy, reduce the cost of tuning hyperparameters, and significantly reduce the final number of gradient updates required to train a model.
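The following is a minimal sketch (our own rendering of the heuristic above, not the authors' code) of one rescaling step; `phase` selects whether the learning rate or the momentum coefficient absorbs the batch-size increase, and the validation accuracy decides when to stop in practice.

```python
def scale_hyperparameters(batch, lr, momentum, factor=3.0, phase="lr"):
    """Grow the batch size by `factor` while either scaling the learning rate
    eps ∝ B (phase="lr") or scaling the momentum so that (1 - m) ∝ 1/B
    (phase="momentum"), keeping the SGD noise scale approximately fixed."""
    new_batch = batch * factor
    if phase == "lr":
        return new_batch, lr * factor, momentum
    new_m = 1.0 - (1.0 - momentum) / factor
    return new_batch, lr, new_m

# Example starting point: B = 64, eps = 0.1, m = 0.9.
print(scale_hyperparameters(64, 0.1, 0.9, phase="lr"))         # ~ (192, 0.3, 0.9)
print(scale_hyperparameters(192, 0.3, 0.9, phase="momentum"))  # ~ (576, 0.3, 0.967)
```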
[ 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BJij4yg0Z
Generalization is strongly correlated with the Bayesian evidence, and gradient noise drives SGD towards minima whose evidence is large.
In the industrial field, the positron annihilation is not affected by complex environment, and the gamma-ray photon penetration is strong, so the nondestructive detection of industrial parts can be realized. Due to the poor image quality caused by gamma-ray photon scattering, attenuation and short sampling time in positron process, we propose the idea of combining deep learning to generate positron images with good quality and clear details by adversarial nets. The structure of the paper is as follows: firstly, we encode to get the hidden vectors of medical CT images based on transfer Learning, and use PCA to extract positron image features. Secondly, we construct a positron image memory based on attention mechanism as a whole input to the adversarial nets which uses medical hidden variables as a query. Finally, we train the whole model jointly and update the input parameters until convergence. Experiments have proved the possibility of generating rare positron images for industrial non-destructive testing using countermeasure networks, and good imaging have been achieved. In recent years, with the advancement of science and technology, especially the rapid development of high-end manufacturing, in the field of industrial non-destructive testing, in many cases, it is necessary to perform defect detection without damaging or affecting the performance and internal structure of the device under test. Therefore, there is an increasing demand for corresponding detection devices. In complex industrial environments (such as aviation, internal combustion engines, chemical engineering, etc.), it is of great research significance to detect faults and defects in closed chambers. In this paper, the use of positron annihilation gamma photon imaging positron emission imaging technology for industrial nondestructive testing is studied. The occurrence of positron annihilation is not affected by factors such as high temperature, high pressure, corrosion, etc., so it can penetrate the dense metal material cavity, realize the undisturbed and non-destructive trace imaging of the detected object, and obtain the detected object after processing. Describe the image and perform a state analysis. Therefore, the quality of imaging technology directly affects the analysis of fault detection . Positron Emission Tomography (PET) was first used in medical imaging. The principle is that when a radioactive positron nucleus decays, a proton in the nucleus is converted into a neutron, and a positron and a neutron are released. The positron will quickly combine with the electrons in the material in a very short time, causing a positron annihilation phenomenon, producing a pair of gamma photon pairs with opposite directions and energy of 511KeV. Photon pairs are collected, identified, processed, and finally reconstructed to obtain medical images. Commonly used PET reconstruction algorithms are analytic method (K, 2000) and statistical method . The currently widely used algorithms are MLEM and OSEM. At present, PET technology has been widely used in the clinical diagnosis of human diseases. The advantages are quite obvious, the imaging quality is higher, and it shows great advantages in medical research. 
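Since the MLEM and OSEM algorithms mentioned above serve as the reconstruction baselines that the proposed generative approach aims to improve on, a compact sketch of the basic MLEM update may be useful for orientation; this is a generic textbook form under simplifying assumptions (a dense system matrix, no attenuation or scatter correction), not the authors' implementation.

```python
import numpy as np

def mlem(A, y, n_iters=20, eps=1e-12):
    """Basic MLEM iteration for emission tomography.

    A: system matrix of shape [n_detectors, n_voxels], A[i, j] = probability that an
       annihilation in voxel j is detected along line-of-response i.
    y: measured coincidence counts per line-of-response, shape [n_detectors].
    Update: x <- x / (A^T 1) * A^T (y / (A x)).
    """
    n_voxels = A.shape[1]
    x = np.ones(n_voxels)              # non-negative initial image
    sensitivity = A.sum(axis=0) + eps  # A^T 1
    for _ in range(n_iters):
        expected = A @ x + eps         # forward projection
        x = x / sensitivity * (A.T @ (y / expected))
    return x

# Toy usage with a random system matrix and Poisson counts from a known image.
rng = np.random.default_rng(0)
A = rng.random((50, 16))
x_true = rng.random(16)
y = rng.poisson(A @ x_true)
print(mlem(A, y)[:4])
```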
The principle of positron emission imaging in the industrial non-destructive field is similar to medical imaging, but it has its own unique difficulties: the detection environment is harsher, the sampling time is short, and, due to the scattering and attenuation of photons, the quality of industrial positron images is even worse. Therefore, the reconstructed images need to be further processed to obtain higher-quality images. In this paper, we propose an adversarial network with a positron image memory module based on an attention mechanism. Using medical images as the basic dataset, introducing transfer learning, and building a memory module according to the contribution of detailed features to the images, a positron image generation network for industrial non-destructive testing is obtained through joint training, thus achieving higher-quality generation of industrial positron images. In summary, our main contributions in this paper are as follows. We are the first to advocate using Generative Adversarial Networks to enhance the detail of positron images in the industrial non-destructive field, and we realize the generation and processing of scarce image data in this professional domain. We use a medical CT image dataset as the basic training sample of the network framework, following the idea of transfer learning, and then extract the features of a small number of industrial non-destructively detected positron images, which improves the details of the generated images and gives the network model better applicability in industrial non-destructive testing. We combine an attention-based mechanism with domain-specific image feature extraction: by constructing a memory module containing industrial positron image features, we can guide image generation in a specific domain and finally obtain an industrial non-destructive positron image generation model. We train the whole network jointly: through the discriminative network of the adversarial framework, gradients are back-propagated to the front-end networks, the input parameters are updated, and the model is optimized until convergence is achieved and the Turing test is passed successfully. Model of GAN: Since GAN was proposed in 2014, it has become a research hotspot in unsupervised learning, and related research continues to increase. Improvements to the network structure mainly focus on training stability and mode collapse. Combining CNNs with GANs realized practical image generation, moving GAN from theory to practice and establishing the framework and training mode of the whole GAN model. Arjovsky et al. propose Wasserstein GAN (WGAN), which uses the Earth-Mover distance instead of the JS divergence to measure the distance between the real and generated sample distributions, so that the mode collapse problem is well alleviated. Mirza proposed Conditional Generative Adversarial Nets (CGAN) in 2014, which transformed the unsupervised task into a supervised one, thus improving the stability of training. Mao et al. propose Least Squares Generative Adversarial Networks (LSGAN), which replace the loss function of the original network with a least-squares loss, alleviating the training instability and the insufficient diversity of generated images in GAN. Karras et al.
of the NVIDIA team propose a progressive structure model to realize the transition from low-resolution to high-resolution images, so that a generative model for high-definition images can be trained smoothly. The DeepMind team (A et al., 2018) introduce the idea of orthogonal regularization into GAN and greatly improve the generation performance by truncating the input prior distribution Z, contributing a lot of practical experience in parameter tuning during training. EBGAN (J et al., 2016) regards the discriminative network in GAN as an energy function and adds a reconstruction error to it, which can mitigate the problem of mode collapse. Unrolled GAN (L et al., 2016) modifies the loss function of the generative network so that the generator can take into account the changes of the discriminator over K further training steps and avoid the mode collapse caused by switching between different modes. DRAGAN (N et al., 2017) introduces the "no-regret algorithm" from game theory and constructs a gradient penalty scheme to avoid undesirable local equilibria, which alleviates mode collapse and improves training efficiency. Domain Application: The greatest advantage of GAN is that it can generate realistic samples without any explicit modeling of the data distribution in the whole generation process. Therefore, GAN has achieved good results in many fields such as image, text, and speech. Examples include image generation (P et al., 2016) (T et al., 2017) (Z et al., 2017), super-resolution processing of images (C et al., 2017), object detection (Y et al., 2018) (J et al., 2017), object transfiguration (S et al., 2017), joint image generation, video generation, text-to-image synthesis (H et al., 2017) (A et al., 2017), text generation (J et al., 2018), speech conversion (C et al., 2017), and domain adaptation (J et al., 2017). These studies provide more possibilities for the practical application of GAN. One line of work realizes the batch generation of realistic medical images (prostatic lesion patches, lung cancer nodules, retinal images), with resolutions of 16*16, 56*56 and 64*64 respectively, whose results can pass the Turing test successfully. (C et al., 2018a) uses DCGAN to learn the distribution of higher-resolution MR images from a small number of samples and compares them with the real images, making the synthetic images hard to distinguish from real ones. (C et al., 2018b) uses PGGAN to synthesize skin lesion images and successfully shows highly realistic synthetic images. Conditional medical image generation studies are as follows: (D et al., 2017) realizes CT image synthesis from MR images by means of a 3D fully convolutional network; the loss function of the network is composed of a reconstruction loss and a gradient loss, and the network is also trained with an additional adversarial network to improve the realism of the synthesized images. (P et al., 2017) compresses images of the vascular tree into a multivariate normal distribution by means of an adversarial autoencoder; the normal distribution is sampled to synthesize arbitrary high-resolution vascular tree images, and an end-to-end framework for high-resolution retinal image synthesis is then obtained by using an image-to-image translation model (P et al., 2016). (T et al., 2017) proposes a two-stage GAN, the first stage of which synthesizes vessel tree images from noise, while the second network uses an image translation framework to generate realistic, high-resolution pairs of ground-truth vessel segmentations and the corresponding eye fundus images.
(W et al., 2018) uses CGAN as the framework to synthesize 200*200 PET images from CT images and binary label maps. The research's contribution lies in the construction of a two-channel generation network that achieves a more realistic global output; the model is called multi-channel GAN. (F & D, 2018) sets up a multi-stage generator that produces speckle images, low-resolution images, and high-resolution images in turn from intravascular ultrasound simulations of tissue maps, using different generating networks. (A & G, 2017) conducts joint learning by adding a task-specific network to CGAN and thereby obtains a network model that retains task-specific characteristics. (M et al., 2018) uses WGAN as the network framework and uses noise and attribute vectors as inputs to generate high-resolution 3D images. In addition, GAN models have also achieved research results in image segmentation, reconstruction, detection, denoising, recognition, and classification, which demonstrates the feasibility of applying GAN in professional, domain-specific fields. Traditional positron image processing in the field of industrial non-destructive testing focuses on improving the image reconstruction algorithm, but due to the inherent characteristics of few sampling points and short sampling time, the quality of the obtained images is poor, and the stability of the images under field conditions is also poor. Therefore, in this paper, we innovatively introduce neural networks for positron image processing, and experimental verification shows that high-quality images are obtained. The structure of the whole memory-module-based positron image adversarial generation network is shown in Figure 1. The original GAN model consists of two parts: the discriminative net and the generative net. Through training, the generator generates pseudo-data as close as possible to the real data distribution. The initial input of the network is a random noise signal z, which the generator maps to a new data space to obtain the generated data G(z); the discriminator then outputs a probability value through a binary classification of real and generated data, and this confidence level indicates the performance of the generator. Through continuous iterative optimization during training, the optimum is reached when the two sets of data can no longer be distinguished. The basic idea of the GAN comes from zero-sum game theory. The two networks compete against each other: the discriminator aims to distinguish the real distribution from the generative distribution as well as possible, while the generator aims to make the generated data consistent with the real data in the feature space so that the discriminator cannot tell them apart; finally the whole model reaches the optimum. The mathematical model is expressed as the standard minimax objective (Equation 1): min_G max_D V(D, G) = E_{x~p_data(x)}[log D(x)] + E_{z~p_z(z)}[log(1 − D(G(z)))], where x represents real images, z represents the noise fed to the generator, G(z) represents the generated images, and D(x) represents the probability that an image is judged to be real. GAN directly fits the distribution of n-dimensional vectors representing images and samples with random noise; the whole training process is unsupervised, so the direction of data generation is uncontrollable unless the process traverses all initial distributions. At the same time, the generated images may not be a faithful expression of the image semantics due to the excessive pursuit of visual quality.
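To make the adversarial objective above concrete, the following minimal sketch computes the two players' losses for one batch under the standard minimax formulation of Equation 1. The discriminator D and generator G are assumed to be callables (e.g., PyTorch modules) with D returning probabilities in (0, 1); this is an illustrative assumption, not the authors' implementation.

```python
import torch

def gan_losses(D, G, real_images, z):
    # Standard minimax GAN losses for one batch (Equation 1).
    # D maps images to probabilities in (0, 1); G maps noise z to images.
    fake_images = G(z)
    eps = 1e-8
    # Discriminator: maximize log D(x) + log(1 - D(G(z)))  ->  minimize the negative.
    d_loss = -(torch.log(D(real_images) + eps).mean()
               + torch.log(1.0 - D(fake_images.detach()) + eps).mean())
    # Generator: minimize log(1 - D(G(z))); in practice the non-saturating
    # variant -log D(G(z)) is often used instead.
    g_loss = torch.log(1.0 - D(fake_images) + eps).mean()
    return d_loss, g_loss
```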
Therefore, when using the adversarial model to generate positron images for industrial non-destructive testing, we choose to add a prior to restrict the data generation, so that the generative model can be trained for specific domains. Because industrial positron image data are scarce, we introduce transfer learning and use medical images as training data to construct an encoder based on the variational auto-encoder. First, we sample the medical image data X to get a series of sample points x_1, x_2, x_3, ..., x_n, so that all the sample data in X can be fitted and a distribution p(x) is obtained, as described in Equation 2. To achieve this goal, the distribution fitting of the data sample X is realized with the help of a latent variable Z. It is assumed that p(x) describes a probability distribution over X generated from Z, which satisfies a Gaussian distribution. Therefore, the whole encoder can be expressed as sampling Z from the standard normal distribution; in the process, the variance and mean of the sample data X are produced by a neural network. The encoding process can be parameterized as in Equation 3, where the mean and variance of the normal distribution exclusive to x_k are obtained; Z_k can then be sampled from this exclusive distribution. In order to obtain a generation model better suited to positron images in the industrial field, we propose an image feature memory module based on an attention mechanism, which is used to extract domain image features; this network is a main focus of this section. The basic flow of the whole network is as follows: 1) use a neural network to extract the features of the scarce positron images and obtain the image feature vectors; 2) combine these vectors with the hidden variable obtained in Section 3.2 via the attention mechanism to obtain an image memory model; 3) use the memory model as the input of the adversarial nets and train jointly with the whole network to obtain a positron image generator for the industrial field. Positron image feature extraction: We use principal component analysis (H et al., 2015) to extract features from the positron sample data, using a vector-space transformation to reduce the dimensionality of the high-dimensional positron data. First, the original data are transformed into a new coordinate system by projection onto the new coordinate vectors. Second, the variance of the first principal component of the projected data in the new coordinate system is the largest; as the component index increases, the variance decreases in turn and the dimensionality is reduced. This is described in Equation 4. The data matrix Y is de-meaned so that the mean value of each dimension is 0. We then find the most important feature directions in the images, that is, the directions along which the data fluctuate the most, so that the sum of squares of all samples projected onto the unit vector µ is the largest; the value of µ is obtained using the Lagrange multiplier method, as expressed in Equation 5. The network then uses a convolutional neural network (CNN) to construct the image feature extraction network; the structure is divided into three layers, namely two convolution layers and one non-linear output layer. First, small image slices are extracted from the sample images, with the same dimension as the convolution kernel. Then all the pixels in them are traversed and a two-level convolution operation is performed.
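As an illustration of the encoder and PCA steps described above (before returning to the CNN-based extractor that continues below), the sketch pairs a reparameterized draw of the medical-image latent variable with a plain PCA projection of the positron samples. Shapes and variable names are assumptions for illustration only.

```python
import numpy as np
import torch

def sample_latent(mu, log_var):
    # Reparameterized draw of the medical-image latent Z: z = mu + sigma * eps, eps ~ N(0, I).
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps

def pca_features(Y, n_components):
    # PCA on the positron samples Y (rows = flattened images): de-mean the data,
    # then project onto the directions of largest variance (Eqs. 4-5, sketch).
    Y = Y - Y.mean(axis=0)
    cov = Y.T @ Y / (len(Y) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return Y @ top
```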
Finally, in the output layer of this feature-extraction network, a hashing operation and histogram statistics are carried out to output the feature vector. Memory Module Based on Attention Mechanism: The obtained positron feature vector Ŷ is fused with the hidden variable Z of the medical images by means of the attention mechanism to form the input to the nets. The purpose is to make the prior knowledge contained in the nets focus more on positron features, so that the features of the scarce data can be used more fully in the whole training process. The basic idea is global attention, and the focus in our model is to attend to all positron image features. The specific realization is to align the image data vectors, directly use the medical images as query vectors, and feed the positron image feature vectors as the hidden states to calculate their weights (Equation 6), where z_t is the medical image distribution, y_s are the feature vectors extracted from positron images, and score(z_t, y_s) is the scoring criterion for the operation. The scores are normalized, so the contribution of each positron image feature to the network is obtained and the image features can be fused according to the weight ratio. Finally, a vector containing the prior knowledge of the domain is obtained as the overall input of the adversarial nets. Generative model: The generative network is constructed based on DenseNet (G et al., 2017), so that the positron image features can be reused repeatedly in the model. The network also strengthens the contribution of the characteristics of the scarce data, so that the generated images are closer to real industrial positron images in detail. The generative model is as follows: the output of the memory model in Section 3.3 serves as the overall input to the net, and the input of each layer is related to the outputs of all the previous layers, not only to the immediately preceding layer. This can be expressed as Equation 7: X_l = H_l([X_0, X_1, ..., X_(l−1)]), where [X_0, X_1, ..., X_(l−1)] is the concatenation fed to the net. We can group all output feature maps from layer X_0 to X_(l−1) according to different channels; this structure reduces the parameters without randomly discarding features, so that the initial input enters the convolution calculation of every layer and feature reuse is realized. The basic building block is a 3*3 convolution layer, Batch Normalization (S & C, 2015) and a ReLU non-linear activation layer. The feature maps of all previous layers need to be concatenated in the network. In order to perform down-sampling, the net is divided into several Denseblocks, with transition layers used between them; following the original network, a transition layer consists of a Batch Normalization layer, a 1*1 convolution and 2*2 average pooling. Within the same Denseblock, the state of each layer is associated with all previous layers, and the training of each layer is driven by the global state feedback of the network to update the parameters. Discriminative model: The discriminative net is used to discriminate specific images in a specific domain, in which the domain image features are used as far as possible as the evaluation criteria for network classification. The net uses a Markovian discriminator based on PatchGAN, which is composed of fully convolutional layers. The output is a matrix of patch scores, and the mean of the matrix is used as the output of the discriminative network, so that each receptive field in the image can be judged; this is equivalent to performing the convolutional discrimination in patches, layer by layer, and the result is finally fed back to the whole network.
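A minimal sketch of the attention-based fusion of Equation 6 is given below. It uses dot-product scoring for score(z_t, y_s), which is an assumption since the paper does not fix the scoring function, and treats the medical latent vector as the query and the positron feature vectors as the keys/values.

```python
import torch
import torch.nn.functional as F

def attention_fuse(z, Y_feat):
    # z: (d,) medical latent vector (query); Y_feat: (m, d) positron feature vectors.
    scores = Y_feat @ z                       # score(z_t, y_s), dot-product form (assumed)
    weights = F.softmax(scores, dim=0)        # normalized contribution of each feature
    return (weights.unsqueeze(1) * Y_feat).sum(dim=0)   # fused memory vector for the GAN input
```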
The receptive field is calculated as in Equation 8. In our model, the real input samples are medical datasets; therefore, in order to make the generated data better characterize positron image features, we add an additional attention perception loss function to the net. The loss function of the whole net consists of two parts, L_GAN and L_APG: the attention loss function L_APG is used to characterize the feature contribution of the positron images and is realized by measuring the distribution distance between the generated data and the positron images. This loss function is described in Equation 9, where W_i represents the number of elements in layer i and s is the number of layers. The loss function of the whole net is described in Equation 10, and L_GAN is similar to that of the original GAN. 4 EXPERIMENTS: We build the model by first using the encoder to obtain the hidden vectors of the medical images and using principal component analysis to reduce the dimensionality of the positron images and extract their main features. The memory module and the adversarial nets are trained jointly; during back-propagation, the discriminative network updates the parameters of the front-end networks, so that the feature extraction network extracts the features repeatedly until the whole network reaches the optimal model. Finally, the positron image generator for industrial non-destructive testing is obtained. The discriminator operates on image patches of 70*70 pixels. The learning rate is 0.0002 for the whole net, and the model is trained iteratively using the Adam optimizer (β_1 = 0.5). DeepLesion: The dataset (K et al., 2018) contains more than 32,000 lesions from more than 10,000 case studies. The data were developed by the National Institutes of Health Clinical Center (NIHCC) by mining historical medical data from picture archiving and communication systems, and it is perhaps the largest set of CT medical image data publicly available. In our experiment, 150,000 images were selected for training, with an image size of 256*256 pixels. Positron images: The dataset is obtained with the Geant4 Application for Tomographic Emission (GATE). GATE is a simulation toolkit that can simulate the physical processes of a PET imaging system, such as geometric size, material composition, and detector movement. In the model design, a special-shaped aluminium tube is filled with a positron nuclide solution; its activity is 600 Bq, the number of detectors is 184*64, the sampling time is 0.1 s, the energy resolution is 15 percent, the time resolution is 300 ps, the energy window is 350-650 keV, and the time window is 10 ns. The sampling time of 0.1 s is designed to meet the need for rapid sampling in industrial settings. In this section, we compare our approach with commonly used generation models for the task of industrial positron image generation. Quantitative results are presented in Table 1. The metrics used to evaluate the performance of our method are multi-scale structural similarity (MS-SSIM) (A et al., 2017) and the Fréchet Inception Distance (FID) (M et al., 2017): MS-SSIM measures the similarity of two images, while FID measures the Fréchet distance between two distributions in a feature space. Comparing the experimental data, we can clearly see that the adversarial network constructed in this paper generates positron images for this professional field more effectively, and the generated images are closer to the real images.
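For reference, the sketch below computes the Fréchet distance underlying the FID metric reported in Table 1, given two sets of feature embeddings. The choice of embedding network (e.g., Inception features) is left open here and is an assumption, not a statement about the authors' evaluation code.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_gen):
    # feats_*: (num_samples, feat_dim) arrays of image embeddings.
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):          # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_g
    # FID = ||mu_r - mu_g||^2 + Tr(cov_r + cov_g - 2 (cov_r cov_g)^(1/2))
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```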
In this paper, we introduce an application of GAN in the field of non-destructive testing for specific industries. We combine transfer learning to make up for the problem of insufficient data. The key point is to introduce an attention mechanism to construct a positron image feature memory module, which can reuse image features under conditions of scarce data. At the same time, an attention loss function is added to the discriminative net to further improve the generator's performance. Experiments show that, compared with state-of-the-art generation methods in deep learning, the model in our paper achieves an obvious improvement in the quality of industrial positron image generation. In the future, our focus is to further study the application of generative adversarial networks in industrial positron image processing and to further improve the quality of domain images.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SkxcSpEKPS
adversarial nets, attention mechanism, positron images, data scarcity
We revisit the Recurrent Attention Model (RAM), a recurrent neural network for visual attention, from an active information sampling perspective. We borrow ideas from neuroscience research on the role of active information sampling in the context of visual attention and gaze, where the author suggested three types of motives for active information sampling strategies. We find the original RAM model only implements one of them. We identify three key weaknesses of the original RAM and provide a simple solution by adding two extra terms to the objective function. The modified RAM 1) achieves faster convergence, 2) allows dynamic decision making per sample without loss of accuracy, and 3) generalizes much better to longer sequences of glimpses than it was trained for, compared with the original RAM. We revisit the Recurrent Attention Model (RAM), a recurrent neural network for visual attention, from an active information sampling perspective. The RAM, instead of processing the input image for classification in full, only takes a glimpse at a small patch of the image at a time. The recurrent attention mechanism learns where to look to obtain new information based on the internal state of the network. After a pre-defined number of glimpses, RAM finally makes a prediction as output. Compared with the attention mechanism which now dominates AI/NLP research, such as Transformer and BERT, this recurrent attention mechanism is fundamentally different, as it is used to obtain new information (active sampling of information), rather than to process information that is already fully observed. In this paper, we identify three weaknesses of this widely-cited approach. First, the convergence of RAM training is slow. Second, RAM does not support a dynamic number of glimpses per sample, but uses a fixed number of glimpses for every sample. Third, and perhaps most importantly, the performance of the original RAM does not improve but rather decreases dramatically if it takes more glimpses, which is counter-intuitive. We provide a simple solution of adding two extra terms to the objective function of RAM, inspired by neuroscience research which discusses the logic and neural substrates of information sampling policies in the context of visual attention and gaze. Based on the evidence available so far, the author suggested three kinds of motives for the active sampling strategies of decision-making, while the original RAM only implements one of them. We incorporate the other two motives into the objective function, and by doing so we 1) achieve much faster convergence and 2) instantly enable decision making with a dynamic number of glimpses for different samples with no loss of accuracy. 3) More importantly, we find that the modified RAM generalizes much better to longer sequences of glimpses than it was trained for. Recurrent Attention Model. RAM combines a recurrent neural network with reinforcement learning to utilize visual attention for classification, and is trained with policy gradient in an end-to-end manner. At each timestep, it takes a glimpse at a patch of the input image, processes the information, and then uses its internal state to select the next location to focus on. By adaptively selecting a sequence of image patches and only processing the selected patches, it outperforms a convolutional neural network baseline while its computation cost is cheap and independent of the input image size.
Due to the page limit, we refer readers to the original paper, online tutorials and highly-starred implementations on Github. The recurrent attention mechanism allows RAM to actively obtain new information for decision making. Note that though RAM is trained end-to-end with the help of the REINFORCE rule, it is not a standard reinforcement learning problem where the objective is to maximize the cumulative reward (or minimize the cumulative regret); instead, RAM maximizes the simple reward (minimizes the simple regret). The reward is given only to the final prediction action, not to the glimpse actions. In a sense, a glimpse action is rewarded indirectly only if it obtains useful information that helps make a correct prediction. This is very similar to the problem of pure-exploration bandits, specifically best arm identification. We think this distinguishing feature of active sampling of information for decision making is not well addressed in deep learning, reinforcement learning, or neuroscience research. Weaknesses of the Recurrent Attention Model. First, the original RAM paper did not report the convergence speed of RAM training. We inspected some Github implementations of RAM: one implementation (conan7882, 2018) takes 1000 epochs to achieve the paper's accuracy; another (kevinzakka, 2017) claims to reach the paper's accuracy in 30 epochs, but we find that it uses a technique to augment the test-time accuracy, introduced in the follow-up work of RAM, by processing an input multiple times and averaging the predictions. When we disable this technique in (kevinzakka, 2017), we find that the convergence is in fact much slower. Second, RAM uses a fixed number of glimpses for each input sample; however, we would like RAM to terminate automatically once it is sufficiently certain about its prediction. It is possible that some samples are easier to predict and need fewer glimpses. In the future work section of the original paper, the authors briefly mentioned that they trained a separate network to predict when to terminate. In this way RAM learns to do dynamic prediction "once it has enough information to make a confident classification". Third, RAM does not generalize well when using a larger number of glimpses. This generalization issue comes in two different settings. • When the number of glimpses is the same for training and testing, which is the experimental setting in the original RAM paper. The authors showed that RAM performs best on MNIST when trained with N = 6 glimpses per sample, and more glimpses begin to weaken the accuracy. • When the number of glimpses is not the same for training and testing. We train RAM with N = 6 glimpses and test its performance with larger numbers of glimpses. We find that the accuracy does not improve but decreases dramatically (see Figure 2). In this paper we evaluate the second setting. Implications from the neuroscience literature. The authors of the review we draw on surveyed recent studies on attentional learning and highlighted the role of active sampling through visual attention and gaze for decision making. They suggested three types of motives for implementing active sampling policies: "One motive relates simply to the extent to which information is expected to increase the operant rewards of a task; a second motivation is related to reducing the uncertainty of belief states; and yet a third motive may be related to the intrinsic utility or dis-utility of anticipating a positive or a negative outcome (savoring or dread)."
" In the context of RAM for classification, only the first motive is implemented by RAM as the reward of correct classification. We propose to add two extra terms which represents the second and third motives mentioned above, on the original objective function of RAM. We denote the MNIST dataset of in total M sample-label pairs as {X, Y} = {x, y} M; the number of class in Y is K = 10. the parameter of the RAM architecture is denoted as θ, the output K-dimension softmax vector of RAM as f θ (x), and the prediction of classification aŝ y = argmax i f θ (x) i. The original objective function. Under the distribution of all possible sequences s 1:N of actions (at length N), RAM maximizes the reward of correct classification as the objective function: J original (θ) = E p(s 1:N ;θ) [1ŷ =y] where p(s 1:N ; θ) depends on the policy. employed two tricks to maximize J original (θ) which is non-trival. First, borrowed the REINFORCE rule to maximize J original (θ) through policy gradient. Second, because the policy gradient may have a high variance, trained an auxiliary network (one fully-connected layer) called baseline network taking the internal state as input to mimic the value function, which helps the training. So the true objective function the orignal RAM maximizes is J original + J auxiliary. New objective function. We consider the new objective function to be We use J uncertainty to model the motive to reduce uncertainty. However, since the uncertainty is not available, we use a similar network like the baseline network to mimic the residual of error. In other words, we use another auxiliary network to predict the different between the classification and the ground truth, taking the internal state of RAM and the current prediction f θ (x) as input. We call this predicted diffrence as self-uncertainty, denoted as u(x), which is also a K-dimensional vector. Since we wish to minimize this uncertainty, we have J uncertainty = −λ 1 i |u(x) i | We use J intrinsic to model the intrinsic utility of anticipating outcome. The intrinsic utility of anticipating outcome is here interpreted as the intrinsic preference over certain prediction outcomes. By intrinsic, it means this preference is inherently built-in regardless of the hidden state or the gathered information and should make RAM prefer to predict a certain class if there is no enough information. As MNIST is class-balanced and no class is more important than the others, we choose to enforce this preference equally to all the classes, i.e. all the possible prediction outcomes. Technically, the intrinsic preference should encourage the output softmax vector f θ (x) to be close to a prior distribution, which is simply uniform. We incorporate this intrinsic belief as the cross entropy to a uniform distribution on the output K-dimension vector f θ (x): This should regularize RAM to not make any over-confident prediction if there is no enougth information. This should be helpful, especially in the early stage of training, when the current active information sampling policy is not so good so that no useful information is gathered. Note that for simplicity, we merge the objective of the self-uncertainty network and the objective of the baseline network into J auxiliary. The entire modified RAM could be trained end-to-end just as the orignal one. How to do dynamic decision-making in test-time. The introduction of self-uncertainty has a side-effect. 
This side effect gives us an opportunity to enable a dynamic number of glimpses for different samples, because now we have an uncertainty measure. Borrowing ideas from pure-exploration bandits, we use self-uncertainty to construct an upper and a lower 'bound' for each class, and let RAM terminate if the lower bound of the top class is higher than the upper bounds of all the other classes. Given a timestep t, denote the prediction as f_θ^t(x) and the self-uncertainty as u^t(x); the upper and lower bounds for a class i are constructed from f_θ^t(x)_i and u^t(x)_i, where the exploration rate β is a hyperparameter. We take i* to be the class with the highest prediction, i* = argmax_i f_θ^t(x)_i. RAM terminates when the following condition is met: Lower(i*) > max_{i ≠ i*} Upper(i). Given a larger β, RAM will take more glimpses, and when β = 0, RAM terminates after only one glimpse. We evaluate on the MNIST dataset as in the original RAM paper. We set the train-time number of glimpses to N = 6, since it achieves the best test-time accuracy in the original paper. Implementation details are given in the source code. We first show in Figure 1 that the two new terms in the objective both contribute to faster convergence. We test four cases: 1) the original objective, 2) adding J_intrinsic, 3) adding J_uncertainty, 4) adding both new terms. We see in Figure 1 that each of the new objective terms in isolation helps faster learning, and together they give the fastest convergence. As shown in Figure 2, we test the trained models with varying numbers of glimpses. (We want to emphasize that the focus is not the absolute performance, but rather the generalization to more glimpses than at training time.) We first evaluate the non-dynamic case (a fixed number of glimpses for all samples). The performance of the original RAM decreases dramatically when N > 10. With both terms added, the modified RAM no longer suffers this decrease, even when N is large. It is also interesting that when adding only the uncertainty term we observe only a very slight improvement, and it is the intrinsic term that effectively stabilizes the prediction accuracy given more glimpses. We also test the dynamic case by varying the exploration rate. We see that a dynamic number of glimpses does not hurt the performance much, which confirms the hypothesis that some samples are easier to discriminate and thus need fewer glimpses. One may argue that, given longer training time or other hyperparameter tuning, RAM would eventually reach a point where it gives stable prediction accuracy with more glimpses, and that the new objective only makes it converge faster to that point. However, during our experiments we find that with λ_2 = 0.1 the J_intrinsic term can effectively stabilize the prediction given more glimpses even when trained for only 1 epoch. We observe that the l2-norm of the internal states of the original RAM becomes very large given a longer sequence of glimpses, while the modified RAM with J_intrinsic remains stable.
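The stopping rule can be sketched as follows. The precise form of the bounds, f_θ^t(x)_i ± β|u^t(x)_i|, is an assumption consistent with the description above rather than a formula taken from the paper.

```python
import numpy as np

def should_stop(probs, u, beta):
    # probs = f_theta^t(x), u = u^t(x): 1-D arrays of length K; beta: exploration rate.
    upper = probs + beta * np.abs(u)          # Upper(i), assumed form
    lower = probs - beta * np.abs(u)          # Lower(i), assumed form
    i_star = int(probs.argmax())
    others = np.delete(upper, i_star)
    # Terminate when Lower(i*) > max_{i != i*} Upper(i); with beta = 0 this stops after one glimpse.
    return bool(lower[i_star] > others.max())
```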
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HJlVEQt8Lr
Inspired by neuroscience research, we address three key weaknesses of the widely-cited Recurrent Attention Model by simply adding two terms to the objective function.
Graph Neural Networks (GNNs) for prediction tasks like node classification or edge prediction have received increasing attention in recent machine learning research on graph-structured data. However, a large quantity of labeled graphs is difficult to obtain, which significantly limits the true success of GNNs. Although active learning has been widely studied for addressing label-sparsity issues with other data types like text and images, how to make it effective over graphs is an open research question. In this paper, we present an investigation of active learning with GNNs for node classification tasks. Specifically, we propose a new method which uses node feature propagation followed by K-Medoids clustering of the nodes for instance selection in active learning. With a theoretical bound analysis we justify the design choice of our approach. In our experiments on four benchmark datasets, the proposed method outperforms other representative baseline methods consistently and significantly. Graph Neural Networks (GNNs) (; Veličković et al., 2017; ;) have been widely applied in many supervised and semi-supervised learning scenarios such as node classification, edge prediction and graph classification over the past few years. Though GNN frameworks are effective at fusing both the feature representations of nodes and the connectivity information, there is a constant need to enhance the learning efficiency of such frameworks using limited annotated nodes, as the budget for labeling is usually far less than the total number of nodes. For example, in biological problems where a graph represents the chemical structure of a certain drug assembled from atoms, it is not easy to obtain a detailed analysis of the function of each atom, since expert labeling advice is very expensive. On the other hand, one can carefully design a small "seeding pool" so that, by selecting "representative" nodes or atoms as the training set, a GNN can be trained to automatically estimate the functions of all the remaining unlabeled ones. Active Learning (AL) (; Bodó et al., 2011), following this lead, provides solutions that select "informative" examples as the initial training set. While various methods have been proposed for active learning on graphs (; ; ;), active learning for GNNs has received relatively little attention. Two major prior works study active learning for GNNs; both use three kinds of metrics to evaluate the training samples, namely uncertainty, information density, and graph centrality. The first two metrics make use of the GNN representations learned using both node features and the graph; while they might be reasonable with a good (well-trained) GNN model, the metrics are not informative when the label budget is limited and/or the network weights are under-trained, so that the learned representation is not good. On the other hand, graph centrality ignores the node features and might not find the truly informative nodes. Furthermore, these methods only combine the scores using a simple linear weighted sum, which does not address these problems in a principled way. We propose a method specifically designed for GNNs that naturally avoids the problems of the methods above (our code will be released upon acceptance). Our method selects the nodes based on node features propagated through the graph structure,
making it less sensitive to inaccuracies of representations learned by under-trained models. Then we cluster the nodes using K-Medoids clustering; K-Medoids is similar to the conventional K-Means, but constrains the centers to be real nodes in the graph. Theoretical analysis and experiments demonstrate the strength of our algorithm. • We perform a theoretical analysis of our method and study the relation between its classification loss and the geometry of the propagated node features. • We show the advantage of our method over Coreset by comparing the bounds. We also conjecture that similar bounds are not achievable if we use raw, unpropagated node features. • We compare our method with several AL methods and obtain the best performance on all benchmark datasets. Active Learning (AL) aims at interactively choosing data points from the training pool to maximize model performance, and has been widely studied both in theory and in practice. One approach proposes to compute a Coreset over the last-layer activations of a convolutional neural network; the method is designed for general-purpose neural networks and does not take the graph structure into account. Early works on AL with graph-structured data study non-parametric classification models with graph regularization. More recent works analyze active sampling under the graph signal processing framework. However, most of these works have focused on the denoising setting, where the signal is smooth over the graph and labels are noisy versions of node features. Similarly, optimal experimental design can also be applied to graph data, but it primarily deals with linear regression problems instead of nonlinear classification with discrete labels. Graph Neural Networks (GNNs) (; Veličković et al., 2017;) are emerging frameworks of recent years for modeling graph-structured data. Most GNN variants follow a multi-layer paradigm. In each layer, the network performs a message passing scheme, so that the feature representation of a node in the next layer is an aggregation of its neighborhood from the previous layer. The final feature of a single node thus comprises the information from a multi-hop neighborhood, and is usually universal and "informative" enough to be used for multiple tasks. Recent works show the effectiveness of using GNNs in the AL setting. One of them, for instance, proposes to linearly combine uncertainty, graph centrality and information density scores and obtains strong performance; a follow-up further improves the results by using a learnable combination of weights with multi-armed bandit techniques. Instead of combining different metrics, in this paper we approach the problem by clustering propagated node features. We show that our one-step active design outperforms existing methods based on learned network representations in the small-label setting, while not degrading in performance for larger amounts of labeled data. In this section, we give a formal definition of the problem of graph-based active learning under the node classification setting and introduce a uniform set of notations for the rest of the paper. We are given a large graph G = (V, E), where each node v ∈ V is associated with a feature vector x_v ∈ X ⊆ R^d and a label y_v ∈ Y = {1, 2, ..., C}. Let V = {1, 2, ..., n}; we denote the input features as a matrix X ∈ R^{n×d}, where each row represents a node, and the labels as a vector Y = (y_1, ..., y_n). We also consider a loss function l(M|G, X, Y) that computes the loss over the inputs (G, X, Y) for a model M that maps G, X to a prediction vector Ŷ ∈ Y^n.
Following previous works on GNNs, we consider the inductive learning setting; i.e., a small part of Y is revealed to the algorithm, and we wish to minimize the loss on the whole graph, l(M|G, X, Y). Specifically, an active learning algorithm A is initially given the graph G and the feature matrix X. In step t of its operation, it selects a subset s_t ⊆ [n] = {1, 2, ..., n} and obtains y_i for every i ∈ s_t. We assume y_i is drawn randomly according to a distribution P_{y|x_i} supported on Y; we use η_c(v) = Pr[y = c|v] to denote the probability that y = c given node v, and write η(v) = (η_1(v), ..., η_C(v))^T. Then A uses G, X and y_i for i ∈ s_0 ∪ s_1 ∪ · · · ∪ s_t as the training set to train a model, using training algorithm M. The trained model is denoted as M_{A_t}. If M is the same for all active learning strategies, we slightly abuse notation and write A_t = M_{A_t} to emphasize the focus on active learning algorithms. A general goal of active learning is then to minimize the loss under a given budget b: min_A E[l(A_t|G, X, Y)] subject to |s_0 ∪ s_1 ∪ · · · ∪ s_t| ≤ b, where the randomness is over the random choices of Y and A. We focus on M being Graph Neural Networks and their variants, elaborated in detail below. Graph Neural Networks define a multi-layer feature propagation process similar to Multi-Layer Perceptrons (MLPs). Denote the k-th layer representation matrix of all nodes as X^(k), with X^(0) = X ∈ R^{n×d} being the input node features. Graph Neural Networks (GNNs) differ in their ways of defining the recursive function f for the next-layer representation: X^(k+1) = f(X^(k); Θ_k), where Θ_k is the parameter of the k-th layer. Graph Convolution Network (GCN). A GCN uses the specific form f(X^(k); Θ_k) = ReLU(S X^(k) Θ_k), where ReLU is the element-wise rectified linear unit activation function, Θ_k is the parameter matrix used for transforming the size of the feature representation to a different dimension, and S is the normalized adjacency matrix. Specifically, S is defined as S = (D + I)^{-1/2} (A + I) (D + I)^{-1/2}, where A is the original adjacency matrix associated with graph G and D is the diagonal degree matrix of A. Intuitively, this operation updates node embeddings by aggregating over their neighbors. The added identity matrix I (equivalent to adding self-loops to G) acts in a similar spirit to the residual links in MLPs that bypass shallow-layer representations to deep layers. By applying this operation in a multi-layer fashion, a GCN encourages locally related nodes to share similar deep-layer embeddings and, thereafter, predictions. For the classification task, it is standard to append a linear transformation along with a softmax function to the representation in the final layer, so that each class has a prediction score; that is, Ŷ = softmax(X^(K) Θ_K), where softmax(x)_c = exp(x_c) / Σ_{c'=1}^C exp(x_{c'}), which makes the prediction scores sum to 1 over all classes, and K is the total number of layers. We use the GCN structure as the fixed, unified model M for all the AL strategies A discussed below. Traditionally, active learning algorithms choose one instance at a time for labeling, i.e., with |s_t| = 1. However, for modern datasets where the number of training instances is very large, it would be extremely costly to re-train the entire system each time a new label is obtained. Hence we focus on the "batched" one-step active learning setting, and select the informative nodes once and for all when the algorithm starts. This is also called optimal experimental design in the literature.
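For concreteness, a minimal dense-matrix sketch of the propagation matrix S and a standard two-layer GCN forward pass is given below; it uses numpy, omits training entirely, and the exact placement of the final linear layer is a simplification of the description above.

```python
import numpy as np

def normalized_adjacency(A):
    # S = (D + I)^(-1/2) (A + I) (D + I)^(-1/2), with D the degree matrix of A (dense sketch).
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_predict(A, X, theta1, theta2):
    # Two-layer GCN forward pass: softmax(S ReLU(S X Theta_1) Theta_2).
    S = normalized_adjacency(A)
    H = np.maximum(S @ X @ theta1, 0.0)                  # ReLU(S X Theta_1)
    logits = S @ H @ theta2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)              # row-wise class scores
```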
Aiming to select the b most representative nodes as the batch, our target becomes min_{s_0: |s_0| ≤ b} E[l(A_0|G, X, Y)]. The node selection algorithm is described in Section 4.1, followed by the loss bound analysis in Section 4.2 and the comparison with a closely related algorithm (K-Center in Coreset) in Section 4.3. Algorithm 1 takes as input the node representation matrix X, the graph structure matrix G and the budget b. We describe this generic active learning framework using distance-based clustering in Algorithm 1. It acts in two major steps: 1) computing a distance matrix or function d_{X,G} using the node feature representations X and the graph structure G; 2) applying clustering with b centers over this distance matrix, and from each cluster selecting the node closest to the center of the cluster. After receiving the labels (given by matrix Y) of the selected nodes, we train a graph neural network, specifically a GCN, based on X, G and Y for the node classification task. Generally speaking, different options for the two steps above yield different performance in the downstream prediction tasks; we detail and justify our choices below and in subsequent sections. Distance Function. Previous methods commonly use network representations to compute the distance, i.e., d(i, j) = ||(X^(k))_i − (X^(k))_j||_2 for some specific k. While this can be helpful with a well-trained network, the representations are quite inaccurate in the initial stages of training, and such a distance function might not select the representative nodes. Differently, we define the pairwise node distance using the L2 norm of the difference between the corresponding propagated node features: d(i, j) = ||(S^K X)_i − (S^K X)_j||_2, where (M)_i denotes the i-th row of matrix M, and recall that K is the total number of layers. Intuitively, this removes the effect of untrained parameters on the distance, while still taking the graph structure into account. Clustering Method. Two commonly used methods are K-Means and K-Center. We propose to apply K-Medoids clustering. The K-Medoids problem is similar to K-Means, but the centers it selects must be real sample nodes from the dataset. This is critical for active learning, since we cannot label the unreal cluster centers produced by K-Means. Also, we show in Section 4.3 that K-Medoids can obtain a more favorable loss bound than K-Center. We call our method FeatProp, to emphasize the active learning strategy via node feature propagation over the input graph, which is the major difference from other node selection methods. Recall that we use ||(S^K X)_i − (S^K X)_j||_2 to approximate the pairwise distances between the hidden representations of nodes in a GCN. Intuitively, the representation S^K X resembles the output of a simplified GCN obtained by dropping all activation functions and layer-related parameters from the original structure, which introduces a strong inductive bias. In other words, the selected nodes can contribute to the stabilization of the model parameters during the training phase of the GCN. The following theorem formally shows that using K-Medoids with propagated features can lead to a low classification loss. Theorem 1 (informal). Suppose that the label vector Y is sampled independently from the distribution y_i ∼ η(i), and the loss function l is bounded by [−L, L]. Then under mild assumptions, there exists a constant c_0 such that with probability 1 − δ the expected classification loss of A_t satisfies a two-term bound. To understand Theorem 1, notice that the first term is the target loss of K-Medoids (the sum of point-to-center distances), and the second term quickly decays with n, where n is the total number of nodes in graph G.
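The sketch below computes the propagated features S^K X, the pairwise distances d(i, j), and the two clustering objectives that appear in the discussion of Theorems 1 and 2 (the average and the maximum point-to-selected-node distance). It is a dense-matrix illustration for small graphs, not the authors' implementation.

```python
import numpy as np

def propagated_features(A, X, K=2):
    # M = S^K X with S = (D + I)^(-1/2) (A + I) (D + I)^(-1/2); dense sketch.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    S = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.linalg.matrix_power(S, K) @ X

def pairwise_distances(M):
    # d(i, j) = ||M_i - M_j||_2 for all node pairs (O(n^2 d) memory; small graphs only).
    return np.linalg.norm(M[:, None, :] - M[None, :, :], axis=2)

def kmedoids_loss(M, s):
    # (1/n) sum_i min_{j in s} ||M_i - M_j||: the dominant term discussed for Theorem 1.
    d = np.linalg.norm(M[:, None, :] - M[s][None, :, :], axis=2)
    return d.min(axis=1).mean()

def kcenter_loss(M, s):
    # max_i min_{j in s} ||M_i - M_j||: the covering radius appearing in Theorem 2.
    d = np.linalg.norm(M[:, None, :] - M[s][None, :, :], axis=2)
    return d.min(axis=1).max()
```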
Therefore, the classification loss of A_0 on the entire graph G is mostly dependent on the K-Medoids loss. In practice, we can utilize existing robust initialization algorithms such as Partitioning Around Medoids (PAM) to approximate the optimal solution of the K-Medoids clustering. The assumptions we make in Theorem 1 are fairly standard in the literature, and we give the details in the appendix. While our results share some common characteristics with Sener et al., our proof is more involved in the sense that it relates to the propagated features ||(S^K X)_i − (S^K X)_j||_2 instead of the raw features ||(X)_i − (X)_j||_2. In fact, we conjecture that using raw-feature clustering for selection in a GCN will not result in a similar bound: this is because the GCN uses the matrix S to diffuse the raw features across all nodes in V, so the final prediction for node i also depends on its neighbors as well as on the raw feature (X)_i. A clearer comparison in practice is given in Section 5.2. Figure 1: Visualization of Theorem 1. Consider the set of selected points s and the remaining points in the dataset [n]\s. K-Medoids corresponds to the mean of all red segments in the figure, whereas K-Center corresponds to the max of all red segments in the figure. In this subsection we provide justification for using the K-Medoids clustering method as opposed to Coreset. The Coreset approach aims to find a δ-cover of the training set. In the context of using propagated features, this means minimizing the covering radius max_i min_{j ∈ s_0} ||(S^K X)_i − (S^K X)_j||_2. We can show a theorem similar to Theorem 1 for the Coreset approach. Theorem 2. Under the same assumptions as in Theorem 1, with probability 1 − δ the expected classification loss of A_t satisfies an analogous bound in which the average point-to-center distance of Theorem 1 is replaced by the maximum point-to-center distance. It is easy to see that the RHS of the bound in Theorem 1 is smaller than the RHS of the bound in Theorem 2, since the average of the point-to-center distances is never larger than their maximum. In other words, K-Medoids can obtain a better bound than the K-Center method (see Figure 1 for a graphical illustration). We observe superior performance of K-Medoids clustering over K-Center clustering in our experiments as well (see Section 5.2). We evaluate the node classification performance of our selection method on the Cora, Citeseer, and PubMed network datasets. We further supplement our experiments with an even denser network dataset, CoraFull (Bojchevski & Günnemann, 2017), to illustrate the performance differences of the compared approaches in a large-scale setting. We evaluate the Macro-F1 of the methods over the full set of nodes. The budget sizes are fixed for all benchmark datasets; specifically, we select 10, 20, 40, 80 and 160 nodes as the budget sizes. After selecting the nodes, a two-layer GCN with 16 hidden neurons is trained as the prediction model. We use the Adam optimizer with a learning rate of 0.01 and weight decay of 5 × 10^{-4}. All the other hyperparameters are kept at their default values (β_1 = 0.9, β_2 = 0.999). To guarantee the convergence of the GCN, the model trained after 200 epochs is used to evaluate the metric on the whole node set. We compare the following methods: • Random: choosing the nodes uniformly at random from the whole vertex set. • Degree: choosing the nodes with the largest degrees. Note that this method does not consider the node features. • Uncertainty: similar to prior methods, we put the nodes with maximum entropy into the pool of instances. • Coreset: this method performs K-Center clustering over the last hidden representations in the network. If time allows (on Cora and Citeseer), a robust mixed integer programming method (dubbed CoresetMIP) is adopted.
We also apply a time-efficient greedy approximation (Coreset-greedy) on all of the datasets. The center nodes found are then selected into the pool. • AGE: this method linearly combines three metrics — graph centrality, information density, and uncertainty — and selects the nodes with the highest scores. • ANRMAB: this method enhances AGE by learning the combination weights of the metrics through an exponential multi-arm-bandit updating rule. • FeatProp: this is our method. We perform K-Medoids clustering on the propagated features S^K X, where X is the input node features. In the experiments, we adopt an efficient approximate K-Medoids algorithm which performs K-Means until convergence and then selects the nodes closest to the resulting centers into the pool (sketched in the code example below). In our experiments, we start with a small set of nodes (5 nodes) sampled uniformly at random from the dataset as the initial pool. We run all experiments with 5 different random seeds and report the averaged classification accuracy as the metric. We plot the accuracy versus the number of labeled points. For approaches (Uncertainty, Coreset, AGE and ANRMAB) that require the current status/hidden representations of the classification model, a fully-trained model built from the previous budget pool is used; for example, if the current budget is 40, the model trained from the 20 examples selected by the same AL method is used. As shown in Figure 2, our method outperforms all the other baseline methods in most of the compared settings. It is noticeable that AGE and ANRMAB, which use the uncertainty score as a sub-component, achieve better performance than Uncertainty and are the second best methods in most cases. We also show the averaged Macro-F1 with standard deviations across different numbers of labeled nodes in Table 3. It is interesting that our method has the second smallest standard deviation (Degree is deterministic in terms of node selection, so its variance only comes from the training process) among all methods. We conjecture that this is because the other methods, which build upon uncertainty, may suffer from highly variable model parameters in the beginning phase with very limited labeled nodes. Efficiency. We also compare the time costs of our method and Coreset, which also involves a clustering sub-routine (K-Center), in Table 2. It is noticeable that, in order to make Coreset more stable, CoresetMIP uses an extreme excess of time compared to Coreset-greedy in the same setting. An interesting fact we can observe in Figure 2 is that CoresetMIP and Coreset-greedy do not differ much in performance on Citeseer, and Coreset-greedy is even better than CoresetMIP on Cora. This is quite different from the results on image classification tasks with CNNs. This phenomenon distinguishes graph node classification from traditional classification problems. We conjecture that this is partially due to the fact that the nodes no longer have independent embeddings after the GCN structure is applied, which makes the original analysis of Coreset inapplicable. Ablation study. It is crucial to select the proper distance function and clustering subroutine for FeatProp (Line 1 and Line 2 in Algorithm 1). As discussed in Section 4.3, we test against the variant that uses the L2 distance between final-layer GCN representations as the distance function, and the variant that replaces the K-Medoids choice with K-Center. We compare these algorithms in Figure 3.
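A sketch of the approximate K-Medoids step used by FeatProp in the experiments (K-Means followed by snapping each center to its nearest real node) is given below; scikit-learn's KMeans is used as an assumed stand-in for the clustering subroutine.

```python
import numpy as np
from sklearn.cluster import KMeans

def approximate_kmedoids(M, b, seed=0):
    # Run K-Means on the propagated features M = S^K X, then return the real node
    # closest to each cluster center as the labeled pool (approximate K-Medoids).
    km = KMeans(n_clusters=b, n_init=10, random_state=seed).fit(M)
    pool = []
    for center in km.cluster_centers_:
        pool.append(int(np.linalg.norm(M - center, axis=1).argmin()))
    return sorted(set(pool))   # duplicate snaps are possible in principle; keep unique nodes
```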
As is demonstrated in the figure, the K-Center version (blue line) has a lower accuracy than the original FeatProp approach. This observation is compatible with our analysis in Section 4.3 as K-Medoids comes with a tighter bound than K-Center in terms of the classification loss. Furthermore, as final layer representations are very sensitive to the small budget case, we observe that the network representation version (orange line) also generally shows a much deteriorated performance at the beginning stage. Though FeatProp is tailored for GCNs, we could also test the effectiveness of our algorithm over other GNN frameworks. Specifically, we compare the methods over a Simplified Graph Convolution (SGC) and obtain similar observations. Due to the space limit, we put the detailed in the appendix. We study the active learning problem in the node classification task for Graph Convolution Networks (GCNs). We propose a propagated node feature selection approach (FeatProp) to comply with the specific structure of GCNs and give a theoretical characterizing the relation between its classification loss and the geometry of the propagated node features. Our empirical experiments also show that FeatProp outperforms the state-of-the-art AL methods consistently on most benchmark datasets. Note that FeatProp only focuses on sampling representative points in a meaningful (graph) representation, while uncertainty-based methods select the active nodes from a different criterion guided by labels, how to combine that category of methods with FeatProp in a principled way remains an open and yet interesting problem for us to explore. We also evaluate the methods using the metric of Micro-F1 in Table 4 C be the prediction for node i under input G, X, and (M) i,c be the c-th element of (M) i (i.e., the prediction for class c). In order to show Theorem 1, we make the following assumptions: Assumption 1. We assume A 0 overfits to the training data. Specifically, we assume the following two conditions: i) A 0 has zero training loss on s 0; ii) for any unlabeled data (x i, x j) with i ∈ s 0 and j ∈ s 0, we have (A 0) i,yj ≤ (A 0) j,yj and (A 0) i,c ≥ (A 0) j,c for all c = y j. The second condition states that A 0 achieves a high confidence on trained samples and low confidence on unseen samples. We also assume that the class probabilities are given by a ground truth GCN; i.e., there exists a GCN M * that predicts Pr[Y i = c] on the entire training set. This is a common assumption in the literature, and shows that gradient descent provably achieves zero training loss and a precise prediction in polynomial time. Assumption 2. We assume l is Lipschitz with constant λ and bounded in [−L, L]. The loss function is naturally Lipschitz for many common loss functions such as hinge loss, mean squared error, and cross-entropy if the model output is bounded. This assumption is widely used in DL theory (e.g., ;). Assumption 3. We assume that there exists a constant α such that the sum of input weights of every neuron is less than α. Namely, we assume i |(Θ K) i,j | ≤ α. This assumption is also present in . We note that one can make i |(Θ K) i,j | arbitrarily small without changing the network prediction; this is because dividing all input weights by a constant t will also divide the output by a constant t. Assumption 4. We assume that ReLU function activates with probability 1/2. This is a common assumption in analyzing the loss surface of neural networks, and is also used in (; ;). 
This assumption also aligns with observations in practice that usually half of all the ReLU neurons can activate. With these assumptions in place, we are able to prove Theorem 1. Theorem 1 (restated). Suppose Assumptions 1-4 hold, and the label vector Y is sampled independently from the distribution y v ∼ η(v) for every v ∈ V. Then with probability 1 − δ the expected classification loss of A t satisfies Proof. Fix y j for j ∈ s 0 and therefore the ing model A 0. Let i ∈ V \ s 0 be any node and j ∈ s 0. We have For the first term we have The last inequality holds from the Lipschitz continuity of l. Now from Assumption 1, we have otherwise. Now taking the expection w.r.t the randomness in ReLU we have Here E σ represents taking the expectation w.r.t ReLU. Now for we have The inequality follows from. Now for the second loss in we use the property that M * computes the ground truth: We now use the fact that ReLU activates with probability 1/2, and compute the expectation: Here E σ means that we compute the expectation w.r.t randomness in σ (ReLU) in M *. The last inequality follows from definition of α, and that l ∈ [−L, L]. Combining the two parts to and let j = argmin (S K X) i − (S K X) j, we obtain Consider the following process: we first get G, X (fixed data) as input, which induces η(i) for i ∈ [n]. Note that M * gives the ground truth η(i) for every i so distributions η(i) ≡ η X,G (i) are fixed once we obtain G, X 4. Then the algorithm A choose the set s 0 to label. After that, we randomly sample y j ∼ η(j) for j ∈ s 0 and use the labels to train model A 0. At last, we randomly sample y i ∼ η(i) and obtain loss l(A 0 |G, X, Y). Note that the sampling of all y i for i ∈ V \ s 0 is after we fix the model A 0, and knowing exact values of y j for j ∈ s 0 does not give any information of y i (since η(i) is only determined by G, X). Now we use Hoeffding's inequality (Theorem 3) with Z i = l((A 0) i, y i ); we have −L ≤ Z i ≤ L by our assumption, and recall that |V \ s 0 | = n − b. Let δ be the RHS of, we have that with probability 1 − δ, Now plug in, multiply both sides by (n − b) and rearrange. We obtain that i∈V \s 0 Now note that since the random draws of y i is completely irrelevant with training of A 0, we can also sample y i together with y j for j ∈ s 0 after receiving G, X and before the training of A 0 (A does not have access to the labels anyway). So holds for the random drawings of all y's. Now divide both sides of by n and use, we have The same proof as Theorem 1 applies for Theorem 2 using the max of distances instead of averaging. We therefore omit the details here. We attach the Hoeffding's inequality here for the completeness of our paper. 4 To make a rigorous argument, we get the activation of M * in this step, meaning that we pass through the randomness of σ in M *.
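For reference, a standard form of Hoeffding's inequality for independent variables bounded in [−L, L], as used in the proof above, is sketched below; the exact statement and constants of the paper's Theorem 3 may differ slightly.

```latex
% Hoeffding's inequality for independent Z_i in [-L, L] (standard form; constants may differ from Theorem 3)
\[
  \Pr\left[\, \left|\frac{1}{m}\sum_{i=1}^{m} Z_i \;-\; \frac{1}{m}\sum_{i=1}^{m}\mathbb{E}[Z_i]\right| \ \ge\ \epsilon \right]
  \;\le\; 2\exp\!\left(-\frac{m\,\epsilon^{2}}{2L^{2}}\right).
\]
```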
HylwpREtDr
This paper introduces a clustering-based active learning algorithm on graphs.
Continuous Normalizing Flows (CNFs) have emerged as promising deep generative models for a wide range of tasks thanks to their invertibility and exact likelihood estimation. However, conditioning CNFs on signals of interest for conditional image generation and downstream predictive tasks is inefficient due to the high-dimensional latent code generated by the model, which needs to be of the same size as the input data. In this paper, we propose InfoCNF, an efficient conditional CNF that partitions the latent space into a class-specific supervised code and an unsupervised code that is shared among all classes for efficient use of labeled information. Since the partitioning strategy (slightly) increases the number of function evaluations (NFEs), InfoCNF also employs gating networks to learn the error tolerances of its ordinary differential equation (ODE) solvers for better speed and performance. We show empirically that InfoCNF improves the test accuracy over the baseline while yielding comparable likelihood scores and reducing the NFEs on CIFAR10. Furthermore, applying the same partitioning strategy in InfoCNF on time-series data helps improve extrapolation performance.

Invertible models are an attractive modelling choice in a range of downstream tasks that require accurate densities, including anomaly detection and model-based reinforcement learning . These models enable exact latent-variable inference and likelihood estimation. A popular class of invertible models is the flow-based generative models (; ; ;) that employ a change of variables to transform a simple distribution into more complicated ones while preserving the invertibility and exact likelihood estimation. However, computing the likelihood in flow-based models is expensive and usually requires restrictive constraints on the architecture in order to reduce the cost of computation. Recently, introduced a new type of invertible model, named the Continuous Normalizing Flow (CNF), which employs ordinary differential equations (ODEs) to transform between the latent variables and the data. The use of continuous-time transformations in CNF, instead of discrete ones, together with efficient numerical methods such as the Hutchinson's trace estimator , helps reduce the cost of determinant computation from O(d^3) to O(d), where d is the latent dimension. This improvement opens up opportunities to scale up invertible models to complex tasks on larger datasets where invertibility and exact inference have advantages.

Until recently, CNF has mostly been trained using unlabeled data. In order to take full advantage of the available labeled data, a conditioning method for CNF, which models the conditional likelihood as well as the posterior of the data and the labels, is needed. Existing approaches for conditioning flow-based models can be utilized, but we find that these methods often do not work well on CNF. This drawback is because popular conditioning methods for flow-based models, such as in , make use of the latent code for conditioning and introduce independent parameters for different class categories. However, in CNF, for invertibility, the dimension of the latent code needs to be the same as the dimension of the input data and therefore is substantial, which results in many unnecessary parameters. These additional but redundant parameters increase the complexity of the model and hinder learning efficiency.
Such overparametrization also has a negative impact on other flow-based generative models, as was pointed out by , but is especially bad in the case of CNF. This is because the ODE solvers in CNF are sensitive to the complexity of the model, and the number of function evaluations that the ODE solvers request in a single forward pass (NFEs) increases significantly as the complexity of the model increases, thereby slowing down the training. This growing-NFEs issue has been observed in unconditional CNF, but to a much lesser extent . It poses a unique challenge to scaling up CNF and its conditioned variants for real-world tasks and data. Our contributions in this paper are as follows:

Contribution 1: We propose a simple and efficient conditioning approach for CNF, namely InfoCNF. Our method shares the high-level intuition with InfoGAN , thus the eponym. In InfoCNF, we partition the latent code into two separate parts: a class-specific supervised code and an unsupervised code which is shared between different classes (see Figure 1). We use the supervised code to condition the model on the given supervised signal, while the unsupervised code captures other latent variations in the data since it is trained using all label categories. The supervised code is also used for classification, thereby reducing the size of the classifier and facilitating the learning. Splitting the latent code into unsupervised and supervised parts allows the model to separate the learning of the task-relevant features from the learning of other features that help fit the data. We later show that the cross-entropy loss used to train InfoCNF corresponds to the mutual information between the generated image and the codes in InfoGAN, which encourages the model to learn disentangled representations.

We explore the speed-up achievable in InfoCNF by tuning the error tolerances of the ODE solvers in the model. ODE solvers can guarantee that the estimated solution is within a given error tolerance of the true solution. Decreasing this tolerance enhances the precision of the solution but results in more iterations by the solver, which leads to higher NFEs and longer training time. However, when training a neural network, it might not be necessary to achieve high-precision activations, i.e., the solutions of the corresponding ODEs, at each layer. Some noise in the activations can help improve the generalization and robustness of the network (; ; ; a). With carefully selected error tolerances, InfoCNF can gain higher speed and better performance. However, the process of manually tuning the tolerances is time-consuming and requires a large computational budget. To overcome this limitation, we propose a new method to learn the error tolerances of the ODE solvers in InfoCNF from batches of input data. This approach employs learnable gating networks, such as convolutional neural networks, to compute good error tolerances for the ODE solvers.

We study methods to improve the large-batch training of InfoCNF, including tuning and learning the error tolerances of the ODE solvers, as well as increasing the learning rates. We conduct experiments on CIFAR10 and show that InfoCNF equipped with gating networks outperforms the baseline Conditional Continuous Normalizing Flow (CCNF) in test error and NFEs in both small-batch and large-batch training. In small-batch training, InfoCNF improves the test error over the baseline by 12% and reduces the NFEs by 16%.
When trained with large batches, InfoCNF attains a reduction of 10% in test error while decreasing the NFEs by 11% compared to CCNF. InfoCNF also achieves a slightly better negative log-likelihood (NLL) score than the baseline in large-batch training, but attains a slightly worse NLL score in small-batch training. In order to better understand the impact of the gating approach to learn the error tolerances, we compare InfoCNF with and without the gating networks. In small-batch training, InfoCNF with gating networks achieves similar classification and density estimation performance as the same model without the gating networks, but reduces the NFEs by more than 21%. When trained with large batches, gating networks help attain a reduction of 5% in test error and a small improvement in NLLs. We also confirm the benefits of our gating approach on unconditional CNF and observe that on CIFAR10 learning the error tolerances helps reduce the NFEs by 15% while preserving the NLL. Furthermore, we explore the potential benefit of the partitioning strategy for time-series data. In our experiments, when the latent code is partitioned in the baseline LatentODE , the model achieves better performance in curve extrapolation. supervised signals (e.g., the class labels), and the model's parameters respectively. The superscript i is used to indicate the layer i. For example, x i is the set of input features into layer i. Invertible flow-based generative models such as RealNVP and Glow have drawn considerable interest recently. These models are composed of bijective transforms whose Jacobian matrices are invertible with tractable determinants. Let f (z; θ) denote such a transform applied on the latent variable z to generate the data x. Because of its bijective structure, the transform f not only allows exact inference of z given x but also enables exact density evaluation via the change of variable formula: The exact inference and exact density evaluation are preserved when stacking the bijective transforms into a deep generative model. The chain formed by the successive probability distributions generated by these transforms is called a normalizing flow, and the ing generative models are called flow-based generative models. The distribution of the latent code z at the top of the model is usually chosen to be a factorized standard Gaussian to simplify the computation, and the parameters θ are learned by maximizing the exact log-likelihood log p(X; θ) where X is the training set containing all training data x. While flow-based generative models enjoy nice properties, the requirements to ensure the invertibility and tractable computation restrict the expressive power of the model. Recently, the Continuous Normalizing Flows (CNFs) have been explored in to bypass the restrictive requirements in the flow-based generative models and allow the models to be expressive for more complicated tasks. CNF defines the invertible transforms via continuous-time dynamics. It models the latent variable z and the data x as values of a continuous-time variable z(t) at time t 0 and t 1, respectively. Given z, CNF solves the initial value problem to find x The change in log-density under this model follows the instantaneous change of variables formula Thus, CNF reduces the O(d 3) cost of computing the determinant to the O(d 2) cost of computing the trace. Taking advantage of the Hutchinson's trace estimator , this computation cost can be reduced to O(d) where d is the dimension of latent code z. 
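To illustrate the O(d) trace computation mentioned above, the snippet below sketches a single-sample Hutchinson estimator of the divergence term tr(∂f/∂z) using one vector-Jacobian product. It assumes a PyTorch-style dynamics network f(t, z) and Gaussian probe noise; these choices, and the function name, are ours rather than a specific implementation from the paper.

```python
import torch

def hutchinson_divergence(f, z, t):
    """Single-sample Hutchinson estimate of tr(df/dz) via eps^T (df/dz) eps.
    f: dynamics network called as f(t, z); z: (batch, d) tensor with requires_grad=True."""
    eps = torch.randn_like(z)                     # Gaussian probe; Rademacher noise also works
    dz = f(t, z)                                  # dynamics output, shape (batch, d)
    # One vector-Jacobian product instead of forming the full d x d Jacobian
    eps_J = torch.autograd.grad(dz, z, grad_outputs=eps, create_graph=True)[0]
    div_estimate = (eps_J * eps).sum(dim=1)       # unbiased per-example estimate of the trace
    return dz, div_estimate
```

This per-step estimate is what gets integrated over time to track the change in log-density while the ODE is being solved.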
Conditional CNF: To the best of our knowledge, there has not been a conditioning method particularly designed for CNF. However, since CNF belongs to the flow-based generative model family, conditioning methods for flow-based models can also be applied on CNF. In particular, the conditioning approach via a Gaussian Mixture Model (GMM) and an auxiliary classifier proposed in has been widely used. We refer to CNF equipped with this type of conditioning as the Conditional Continuous Normalizing Flow (CCNF). We use CCNF as the baseline for comparison in our experiments. In CCNF, the distribution of the latent code z follows a GMM whose means and scales are functions of the conditional signal y and parameterized by simple neural networks. Furthermore, an additional predictive model is applied on z to model the distribution p(y|z) via an auxiliary predictive task Here, q φ (y) and q θ (z) are usually chosen to be neural networks whose parameters are learned during training. While this conditioning method has been shown to enable flow-based models to do conditional image generation and perform predictive tasks such as object classification in , it is in fact rather inefficient. This is because the size of the latent code z in flow-based generative models is the same as the size of the input image and therefore often large. As a , the conditioning network q φ used to synthesize z and the predictive network q θ applied on z introduce a significant amount of additional parameters for the model to learn. InfoCNF: InfoCNF only uses a portion of the latent code z for conditioning. In particular, InfoCNF splits z into two non-overlapping parts -supervised latent code z y and unsupervised latent code z u -such that z = [z y, z u]. The supervised latent code z y captures the salient structured semantic features of the data distribution. In particular, denote the set of structured latent variables which account for those semantic features by y 1, y 2, · · ·, y L. For simplicity and efficient computation, we assume that these latent variables follow a factored distribution such that The supervised code z y is the concatenation of z y 1, z y 2, · · ·, z y L where z y i is the code that captures the latent variable y i. We use z y for conditioning the model. Similar to the conditional Glow, the distribution p(z y) is modeled by a GMM, N (z y |µ(y), σ 2 (y)), whose centers and scales are functions of the conditional signal y. As in Eq., these functions are parameterized by a neural network q φ (y). The posterior p(y|z) is then approximated by another neural network q θ (z y) applied on z y to solve the corresponding predictive task. The unsupervised code z u ∼ N (z u |0, I) can be considered as source of incompressible noise which accounts for other latent variations in the data. We learn InfoCNF by optimizing the supervised loss from q θ (z y) and the conditional log-likelihood log p(x|y) of the model. The learning objective of InfoCNF is given by where L Xent (ŷ, y) is the cross-entropy loss between the estimated labelŷ and the ground truth label y. β is the weighting factor between the cross-entropy loss L Xent (ŷ, y) and the conditional log-likelihood loss L NLL (x|y). L NLL (x|y) is given by where k are indices for layers in the network and log p(z y |y), log p(z u) are calculated from the formula for log-likelihood of a Gaussian distribution. In our notation, we set z K = z and z 0 = x. For each integral of the trace of the Jacobian in Eqn. 2, without generality, we choose t 0 = 0 and t 1 = 1. 
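To make the partitioned objective concrete, the sketch below computes the prior terms log p(z_y|y) and log p(z_u) together with the cross-entropy from the linear classifier q_θ, in the spirit of Eqns. (4)-(5) but with the log-determinant terms of the flow omitted. The module layout, the diagonal-Gaussian parameterization of the class-conditional means and scales, and the placement of the weighting factor β are our own simplifications.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class InfoCNFHead(nn.Module):
    """Sketch of the conditioning head: supervised code z_y, unsupervised code z_u."""
    def __init__(self, dim_zy, dim_zu, num_classes, beta=1.0):
        super().__init__()
        self.mu = nn.Embedding(num_classes, dim_zy)         # class-conditional means  (q_phi)
        self.log_sigma = nn.Embedding(num_classes, dim_zy)  # class-conditional scales (q_phi)
        self.classifier = nn.Linear(dim_zy, num_classes)    # linear classifier q_theta on z_y
        self.dim_zy, self.dim_zu, self.beta = dim_zy, dim_zu, beta

    def loss(self, z, y):
        assert z.shape[1] == self.dim_zy + self.dim_zu
        z_y, z_u = z[:, :self.dim_zy], z[:, self.dim_zy:]
        mu, log_sigma = self.mu(y), self.log_sigma(y)
        # log p(z_y | y): diagonal Gaussian with class-dependent mean and scale
        log_p_zy = (-0.5 * ((z_y - mu) / log_sigma.exp()) ** 2
                    - log_sigma - 0.5 * math.log(2 * math.pi)).sum(dim=1)
        # log p(z_u): standard normal shared across all classes
        log_p_zu = (-0.5 * z_u ** 2 - 0.5 * math.log(2 * math.pi)).sum(dim=1)
        xent = F.cross_entropy(self.classifier(z_y), y)      # supervised term L_Xent
        nll_prior = -(log_p_zy + log_p_zu).mean()            # prior part of L_NLL (log-det terms added elsewhere)
        return xent + self.beta * nll_prior
```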
The mutual information between the generated images and the codes in InfoGAN is approximated by a variational lower bound via an "auxiliary" distribution, which is chosen to be a neural network. Since InfoCNF is an invertible model, the generated images from the model given the codes matches the input images. Thus, maximizing the mutual information between the generated images and the codes is equivalent to maximizing the cross-entropy loss between the estimated label and the ground truth label, which is the loss L Xent (ŷ, y) in Eqn. 4. Thanks to the invertibility of InfoCNF, we can eliminate the need of using an additional "auxiliary" network. Compared to CCNF, InfoCNF needs slightly fewer parameters since the size of the supervised code z y is smaller than the size of z. For example, in our experiments, z y is only half the size of z, and InfoCNF requires 4% less parameters than CCNF. This removal of unnecessary parameters helps facilitate the learning. As discussed in Section 4.3, our experiments on CIFAR10 suggest that InfoCNF requires significantly less NFEs from the ODE solvers than CCNF. This evidence indicates that the partition strategy in InfoCNF indeed helps alleviate the difficulty during the training and improves the learning of the model. Tuning the error tolerances: We explore the possibility of improving InfoCNF by tuning the error tolerances of the ODE solvers in the model. The advantage of this approach is two-fold. First, it reduces the number of function evaluations (NFEs) by the ODE solvers and, therefore, speeds up the training. Second, we hypothesize that the numerical errors from the solvers perturb the features and gradients, which provides additional regularization that helps improve the training of the model. We extend our approach of tuning the error tolerances of the ODE solvers by allowing the model to learn those tolerances from the data. We propose InfoCNF with learned tolerances, which associates each ODE solver in InfoCNF with a gating network that computes the error tolerance of the solver such that the model achieves the best accuracy and negative log-likelihood with the minimal NFEs. These gates are learnable functions that map input data or features into the tolerance values. In our experiments, we use CNNs for the gates (see Figure 1). The error tolerance decides how many iterations the solvers need to find the solution. This process is discrete and non-differentiable, which creates a unique challenge for training InfoCNF with learned tolerances. We exploit the reinforcement learning approach to solve this non-differentiable optimization and learn the parameters of the gating networks. In particular, at each gate, we formulate the task of learning the gating network as a policy optimization problem through reinforcement learning to find the optimal error tolerances. In InfoCNF, we assume that the error tolerances can be modeled by a Gaussian distribution. The sample sequence of the error tolerances drawn from our policy starting with input x is defined as is the sequence of network layers with parameters θ and are the outputs of the gating network at layer i. Here, the policy is to decide which error tolerance to use and is defined as a function from input x i to the probability distribution over the values of the error tolerances, π(. The CNN in the gating network is used to estimate the parameters of the probability distribution over the values of the error tolerances. 
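A minimal sketch of such a gating network is shown below: a small CNN maps the input of a flow block to the mean and log-standard-deviation of a Gaussian policy over the log10 error tolerance, and the sampled tolerance is handed to the solver while the log-probability is kept for the policy-gradient update described next. The architecture and the log-tolerance parameterization are illustrative assumptions, not the exact configuration used in the paper.

```python
import torch
import torch.nn as nn

class ToleranceGate(nn.Module):
    """CNN gate producing a Gaussian policy over the log10 error tolerance of one ODE solver."""
    def __init__(self, in_channels, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(hidden, 2),                      # -> (mean, log_std) of the policy
        )

    def forward(self, x):
        mean, log_std = self.net(x).chunk(2, dim=1)
        policy = torch.distributions.Normal(mean, log_std.exp())
        log_tol = policy.sample()                      # sampled log10 tolerance (no reparameterization needed)
        tol = float(10.0 ** log_tol.mean())            # one scalar tolerance per batch, passed to the solver
        log_prob = policy.log_prob(log_tol).sum()      # kept for the REINFORCE update of the gate
        return tol, log_prob
```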
We choose the rewards function R i = −NFE i, the negative of the number of function evaluations by the solver at layer i, so that our policy tries to fit the model well and do good classification while requiring less computation. The overall objective is given as: where the α balance between minimizing the prediction/NLL loss and maximizing the rewards. Employing REINFORCE , we can derive the gradients The first part of Eq. 7 is gradients of the cross-entropy and NLL loss while the second part corresponds to the REINFORCE gradients in which r i is the cumulative future rewards given the error tolerance estimated by the gating networks. This combination of supervised, unsupervised, and reinforcement learning encourages the model to achieve good performance on classification and density estimation while demanding less number of function evaluations by the ODE solvers. In this section, we empirically demonstrate the advantage of InfoCNF over the baseline CCNF when trained on CIFAR10. Throughout the experiments, we equip InfoCNF with the gating networks to learn the error tolerances unless otherwise stated. Compared to CCNF, InfoCNF achieves significantly better test errors, smaller NFEs, and better (in large-batch training) or only slightly worse (in smallbatch training) NLL. Furthermore, we observe that learning the error tolerances of the solvers helps improve InfoCNF in all criteria except for a slightly worse NLL in small-batch training and a similar NFEs in large-batch training. We also describe how we evaluate our model to make sure that reported are not biased by the numerical error from the ODE solvers in section 4.2. Dataset: We validate the advantages of our models on the CIFAR10 dataset. Uniform noise is added to the images, and during training, the data are randomly flipped horizontally. We use the FFJORD multiscale architecture composed of multiple flows as in for our experiments. The network details can be found in Appendix B. When conditioning the model, we use separate 1-layer linear networks for the classifier q θ and the condition encoder q φ. The parameters of the networks are initialized to zeros. In InfoCNF, we use half of the latent code for conditioning. We apply a dropout of rate 0.5 on the linear classifier. Training: We train both CCNF and InfoCNF with the Adam optimizer . When using the batch size of 900, we train for 400 epochs with learning rate of 0.001 which was decayed to 0.0001 after 250 epochs. When using the batch size of 8,000, we train for 400 epochs with learning rate of 0.01 which was decayed to 0.001 at epoch 120. The adaptive ODE solvers perform numerical integration and therefore have errors inherent in their outputs. When evaluating the models, we need to take these numerical errors into account. Since our method of learning the error tolerances is purely for training, a reasonable evaluation is to set the error tolerances to a small value at test time and report the on the test set. In order to find which small value of the error tolerance to use for evaluation, we train InfoCNF on 1-D synthetic data. Since a valid probability density function needs to integrate to one, we take InfoCNF trained on the synthetic 1-D data sampled from a mixture of three Gaussians and compute the area under the curve using Riemann sum at different error tolerance values starting from the machine precision of 10 −8 (see Appendix C for more details). Figure 2a shows that the numerical errors from InfoCNF is negligible when the tolerance is less than or equal to 10 −5. 
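The area-under-the-curve check described above takes only a few lines; the sketch below assumes a hypothetical log_density(x, tol) helper that evaluates the trained 1-D model at solver tolerance tol, and grid limits wide enough to cover the mixture's support.

```python
import numpy as np

def density_integral(log_density, tol, lo=-10.0, hi=10.0, n=20000):
    """Riemann-sum estimate of the total probability mass of the learned 1-D density.
    A value close to 1 indicates that the solver error at this tolerance is negligible."""
    xs = np.linspace(lo, hi, n)
    dx = xs[1] - xs[0]
    p = np.exp([log_density(x, tol) for x in xs])
    return float(np.sum(p) * dx)

# for tol in [1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3]:
#     print(tol, abs(1.0 - density_integral(log_density, tol)))
```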
Thus, in our experiments, we set the tolerances to 10^-5 at test time. In order to validate that a tolerance of 10^-5 still yields negligible numerical errors on complex datasets like CIFAR10, we also evaluate our trained models using tolerances of 10^-6, 10^-7, and 10^-8. We observe that when using those values for the tolerances, our trained models yield the same test errors and NLLs as when using a tolerance of 10^-5. The NFEs of the ODE solvers reported in our experiments are computed by averaging the NFEs over training epochs. Figure 2b shows the learned error tolerances from InfoCNF trained on CIFAR10 using small batches of size 900. We validate that the learned tolerances from the train and test sets are similar, and thus the learned tolerances from our model do not simply overfit the training data. Evaluation using the learned error tolerances with different batch sizes is discussed in Appendix D.

Figure 3a shows that compared to CCNF, InfoCNF improves the test classification error on CIFAR10 by 12% when training with small batches. Interestingly, the advantage of InfoCNF over CCNF still holds when we train the models with a large batch size of 8k (10% improvement in test classification error, see Figure 3d). While InfoCNF facilitates classification, it also attains similar NLLs compared to CCNF in small-batch training and better NLLs in large-batch training (Figure 3b, e). Figures 3c and f show the evolution of NFEs during training of InfoCNF and CCNF. In small-batch experiments, InfoCNF is much more efficient than CCNF in the first 240 epochs, but then the NFEs of InfoCNF increase and exceed the NFEs of CCNF. Overall, the NFEs from InfoCNF during training are 16% less than the NFEs from CCNF. When training with large batches, InfoCNF requires 11% fewer NFEs than CCNF.

We study the benefits of using gating networks to learn the error tolerances by comparing InfoCNF with learned error tolerances and with error tolerances set to 10^-5. When training both models with large batches, InfoCNF with learned tolerances achieves a test error 4% lower than the test error attained by the same model with fixed error tolerances. InfoCNF with learned tolerances also yields slightly better NLLs (3.64 vs. 3.67). In small-batch training, both models achieve similar test errors and NLLs. Unlike with the test error and likelihood, InfoCNF with learned tolerances significantly reduces the NFEs compared to InfoCNF with fixed error tolerances when both are trained with small batches (a reduction of 21%). In large-batch training, the NFEs from both models are similar. In summary, when InfoCNF with learned tolerances has an advantage over InfoCNF with tolerances set to 10^-5, it is a notable improvement. Otherwise, InfoCNF with learned tolerances is as good as or only slightly worse than its counterpart. We summarize the comparison between InfoCNF and CCNF in Table 1.

Automatic Tuning vs. Manual Tuning: In Figure 3d, e, and f, InfoCNF is trained with the manually-tuned ODE solvers' tolerances since otherwise the models are hard to train. Thus, for large-batch training, we are comparing the learned tolerances in InfoCNF with the manually tuned tolerances. Furthermore, Figure 10 in Appendix F shows a similar comparison for small-batch training. In both experiments, we observe that our automatic approach learns tolerances which outperform the manually-tuned ones in both classification and density estimation while being only slightly worse in terms of NFEs.
Also, our automatic approach via reinforcement learning requires much less time and computational budget to find the right values for the tolerances compared to the manual tuning. Unconditional CNF: Inspired by the success of InfoCNF, we explore whether the same reinforcement learning approach still work for unconditional models. We compare the baseline CNF in with CNF which uses the CNN gates to learn the error tolerance of the ODE solvers in small-batch training setting. While both models yield similar NLLs (3.37 bits/dim), CNF with learned tolerances significantly reduces the NFEs by 15% compared to the baseline CNF (see Figure 4). Training neural networks using large batches is much faster than small batches, but suffers from poor generalization . In order to conduct meaningful experiments with large-batch training, we study methods to improve the performance of CCNF and InfoCNF. Our experiments confirm that tuning the error tolerance of the ODE solvers, together with using larger learning rate as suggested in , helps enhance the performance of both InfoCNF and CCNF, ing in better test error, lower NLLs, and smaller NFEs (see Figure 5). We study the potential benefit of applying the conditioning method via partitioning the latent code in InfoCNF on time-series data. In this experiment, we choose the LatentODE as the baseline model and conduct the experiment on the synthetic bi-directional spiral dataset based on the one proposed in. In particular, we first generate a fixed set of 5,000 2-dimensional spirals of length 1,000 as ground truth data: 2,500 curves are clockwise and the other 2,500 curves are counter-clockwise. These spirals have different parameters. We then randomly sample one spiral of 200 equally-spaced time steps from each of these ground truth spirals. We add Gaussian noise of mean 0 and standard deviation 0.3 to these small spirals to form the training set for our experiments. The test set is generated in the similar way. We train the LatentODE with and without our conditioning strategy for trajectory fitting and test the trained models for trajectory fitting and extrapolation on data in test set. More details on the dataset, network architecture, and training details are provided in Appendix E. We observe that the LatentODE equipped with our conditioning strategy outperforms the baseline Latent ODE on trajectory fitting and extrapolation tasks, especially in unseen domains. Conditional generative models: applies a generalized linear model on the latent code of a flow-based generative model to compute both p(x) and p(y|x) exactly. Their method does not consider splitting the latent code and is complimentary to our partitioning approach. The conditioning approach proposed in is close to InfoCNF. However, in contrast to this method, InfoCNF does not penalize the mismatch between the joint distribution of the supervised and the unsupervised code p(z y, z u) and the product of their marginal distributions p(z y)p(z u). InfoCNF also does not minimize the MMD distance between the distribution of the backward prediction against the prior data distribution. Instead, we maximize the likelihood of the input image given the label p(x|y) = p(z y |y)p(z u). uses an encoder to generate the partitioned latent code for Glow. This encoder is trained by optimizing an adversarial loss as in the generative adversarial networks (GANs) so that its generated latent codes are indistinguishable from the latent codes computed by Glow for real data. 
Our InfoCNF directly splits the latent code without using an extra complicated architecture like GANs, which might introduce more instability into the ODE solvers and slow down the training. Adaptive computation: The use of gating networks and reinforcement learning to learn a policy for adaptive computation has been studied to enhance the efficiency of the neural networks. (b ;c) employ gating networks to decide which blocks to skip in residual networks. develops a reinforcement learning framework to automatically select compression techniques for a given DNN based on the usage demand. Other works including (; ; ;) study methods to select the number of evaluations in recurrent and residual networks. To the best of our knowledge, our InfoCNF with learned tolerances is the first that learns the error tolerances of the ODE solvers in CNF. Large-batch training: Various methods have been proposed to improve the generalization of neural networks trained with large batches by scaling the learning rate and scheduling the batch size. Among them are (; ; ;). These methods are complimentary to our approach of tuning the error tolerances. We have developed an efficient framework, namely InfoCNF, for conditioning CNF via partitioning the latent code into the supervised and unsupervised part. We investigated the possibility of tuning the error tolerances of the ODE solvers to speed up and improve the performance of InfoCNF. We invented InfoCNF with gating networks that learns the error tolerances from the data. We empirically show the advantages of InfoCNF and InfoCNF with learned tolerances over the baseline CCNF. Finally, we study possibility of improving large-batch training of our models using large learning rates and learned error tolerances of the ODE solvers. We validate InfoCNF on the MNIST dataset. On MNIST, InfoCNF with learned error tolerances, InfoCNF with fixed error tolerances, and the baseline CCNF achieve similar NLLs and test errors. However, the InfoCNF with learned and fixed error tolerances are 1.5x and 1.04x faster than the baseline CCNF, respectively (416 NFEs/epoch vs. 589 NFEs/epoch vs. 611 NFEs/epochs). We include the detailed in Table 2. All experiments are conducted with batch size 900, and the are averaged over 3 runs. Below is the MNIST images generated by InfoCNFwith learned error tolerances. For experiments on CIFAR10, we use the FFJORD multiscale architecture composed of multiple flows as in for our experiments. The network uses 4 scale blocks, each of which contain 2 flows, a "squeeze" operator, and then other 2 flows. Each flow is made of 3 convolutional layers with 64 filters whose kernel size is 3. The squeeze operators are applied between flows to down-sample the spatial resolution of the images while increasing the number of channels. We apply the softplus nonlinearity at each layer. This architecture tries to reduce the dimensionality of the latent representation at each level while preserving invertibility. It was based on the multiscale architecture in , which has been widely used as the base architecture for invertible models. Also, we parameterize q θ and q φ by separate linear networks. 
C SYNTHETIC 1-D USED TO ESTIMATE THE VALUE OF ERROR TOLERANCES OF THE ODE SOLVERS AT TEST TIME As mentioned in Section 4.2, in order to estimate the value of error tolerances of the ODE solvers which yields negligible numerical errors during test time, we train InfoCNF on a 1-D synthetic dataset and calculate the area under the curve at different error tolerance values starting from the machine precision of 10 −8. Examples in this dataset are sampled from the mixture of Gaussians shown by the blue curve in Figure 8. The orange curve in Figure 8 represents the distribution learned by our InfoCNF when the error tolerances are set to 10 −5. Groundtruth distribution Learned distribution Figure 8: Distribution from which we sample the 1-D synthetic data for computing the value of error tolerances used to evaluate the trained InfoCNF (blue curve) and the distribution learned by InfoCNF (orange curve) We explore if the error tolerances computed from batches of input data can be used to evaluate the trained InfoCNF. First, we repeat the experiment on 1-D synthetic data described in Section 4.2 and Appendix C above using the learned error tolerances instead of the fixed error tolerances. We observe that the numerical error in this case is 0.00014, which is small enough. We further use the learned error tolerances to evaluate our trained InfoCNF on CIFAR10 with small-batches. Distribution of these error tolerances at different layers can be found in Figure 2b. The test error and NLL we obtain are 20.85 ± 1.48 and 3.566 ± 0.003, which are close enough to the obtained when setting the error tolerances to 10 −5 (test error and NLL in this case are 20.99 ± 0.67 and 3.568 ± 0.003, respectively, as shown in Table 1). Furthermore, when using the learned error tolerances to evaluate the trained model, we observe that the trained InfoCNF achieves similar classification errors, negative log-likelihoods (NLLs), and number of function evaluations (NFEs) with various small values for the batch size (e.g. 1, 500, 900, 1000, 2000). However, when we use large batch sizes for evaluation (e.g. 4000, 6000, 8000), those metrics get worse. This sensitivity to test batch size is because the error tolerances in InfoCNF are computed for each batch of input data. E DATASET, NETWORK ARCHITECTURES, AND TRAINING DETAILS FOR EXPERIMENTS ON SYNTHETIC TIME-SERIES DATA E.1 DATASET In the experiments on time-series data in Section 4.6, we use the synthetic bi-directional spiral dataset based on the one proposed in. In particular, we first generate a fixed set of 5,000 2-dimensional spirals of length 1,000 as ground truth data: 2,500 curves are clockwise and the other 2,500 curves are counter-clockwise. These spirals have different parameters. The equations of the ground truth spirals are given below: Counter-Clockwise: R = a + b × t; x = R × cos(t) + 5; y = R × sin(t). Here a and b serve as the system parameters and are sampled from the Gaussian distributions N (1.0, 0.08) and N (0.25, 0.03), respectively. We then randomly sample one spiral of 200 equally-spaced time steps from each of these ground truth spirals. We add Gaussian noise of mean 0 and standard deviation 0.3 to these small spirals to form the training set for our experiments. The test set is generated in the similar way. In our experiments, the baseline is the LatentODE model in. The RNN encoder that estimates the posterior q φ (z t 0 |x t 0, x t 1, · · ·, x t N) from the observations is fully connected and has 25 hidden units. 
The latent state z t i are 5-dimensional vectors. Different from the LatentODE, in our model, the first three dimensions of the initial latent state z t 0 are for the supervised code z y and the other two dimensions are for the unsupervised code z u. Like the LatentODE, given the initial latent state z t 0, our model computes the future states z t 1, z t 2, · · ·, z t N by solving the equation The dynamic function f is parameterized with a one-hidden-layer network of 20 hidden units. Also, the decoder that reconstructsx t i from z t i is another one-hidden-layer network of 20 hidden units. Additionally, in our model, the conditioning function q φ and the supervised function q θ are linear networks. In our experiments, the conditioning/supervised signal y are the values of the parameters a, b, and the direction of the spiral (i.e. clockwise or counter-clockwise). We train the model with the Adam optimizer . We use batch training with the learning rate of 0.001. The training is run for 20,000 epochs. We show the test error, NLLs, and the NFEs of InfoCNF with manually-tuned error tolerances for small-batch training on CIFAR10 (the yellow curves) in the Figure 10 in comparison with the from CCNF, InfoCNF, and InfoCNF with learned tolerances in the main text. As can be seen, InfoCNF with learned tolerances, which learns the error tolerances, is still as good as InfoCNF with manually-tuned error tolerances (except for the slightly worse NFEs) when trained with small batches. The NLLs discussed in the main text are the conditional negative log-likelihoods − log p(x|y). We would also like to compare the marginal negative log-likelihoods − log p(x) of CCNF, InfoCNF, and InfoCNF with learned tolerances. Figure 11 shows that like in the case of the conditional NLLs, InfoCNF with learned tolerances in better marginal NLLs in large-batch training but slightly worse NLLs in small-batch training compared to InfoCNF and CCNF. Improving Large-Batch Training of InfoCNF and CCNF: We would like to understand if using large learning rate and tuned error tolerances of the ODE solvers help improve the marginal NLLs in large-batch training. Figure 12 confirms that both InfoCNF and CCNF trained with large learning rate and tuned error tolerances yield much better marginal NLLs in large-batch training compared to the baseline training method which uses small learning rate and constant error tolerances. CCNF without large learning rate and tuned tolerance InfoCNF without large learning rate and tuned tolerance Figure 12: Test marginal NLLs of InfoCNF and CCNF trained on CIFAR10 using large batch size with and without large learning rate and manually-tuned error tolerance. Given promising from training our models with large batches, we would like to compare large-batch with small-batch training in term of speed. Currently, the CNF-based models studied in our paper yield better test classification errors and negative log-likelihoods on CIFAR10 when trained with small batches than with large batches. However, since large-batch training attains smaller NFEs than small-batch training, we would like to explore if the model trained with large batches can reach certain test classification errors and NLLs faster than the same model trained with small batches. In our experiments, we choose to study InfoCNF with learned tolerances since it yields the best test error and NLLs in large-batch training while requiring small NFEs. We compare InfoCNF with learned tolerances trained with large batches and with small batches. 
We also compare with the baseline CCNF trained with small batches. Figure 13 shows that InfoCNF with learned tolerances trained with large batches achieves better test error than CCNF trained with small batches while converging faster. However, it still lags behind CCNF trained with small batches in term of NLLs and InfoCNF with learned tolerances trained with small batches in both test error and NLLs. This suggests future work for improving large-batch training with the CNF-based models. Since InfoCNF with learned tolerances reduces the NFEs while improving the test error and NLL, we would like to explore the possibility of using InfoCNF with learned tolerances for training larger models to gain better performance in classification and density estimation. We train a InfoCNF with learned tolerances with 4 flows per scale block (compared to 2 flows per scale block as in the original model) on CIFAR10 using large batches and compare the with the original InfoCNF with learned tolerances and the baseline CCNF. Here we follow the same notation as in to describe the models. We call the large InfoCNF with learned tolerances with 4 flows per scale block the 2x-InfoCNF with learned tolerances. Figure 14 shows that the 2x-InfoCNF with learned tolerances yields better test errors and NLLs while increasing the NFEs compared to both InfoCNF and CCNF.
SJgvl6EFwH
We propose the InfoCNF, an efficient conditional CNF that employs gating networks to learn the error tolerances of the ODE solvers
A central goal of unsupervised learning is to acquire representations from unlabeled data or experience that can be used for more effective learning of downstream tasks from modest amounts of labeled data. Many prior unsupervised learning works aim to do so by developing proxy objectives based on reconstruction, disentanglement, prediction, and other metrics. Instead, we develop an unsupervised meta-learning method that explicitly optimizes for the ability to learn a variety of tasks from small amounts of data. To do so, we construct tasks from unlabeled data in an automatic way and run meta-learning over the constructed tasks. Surprisingly, we find that, when integrated with meta-learning, relatively simple task construction mechanisms, such as clustering embeddings, lead to good performance on a variety of downstream, human-specified tasks. Our experiments across four image datasets indicate that our unsupervised meta-learning approach acquires a learning algorithm without any labeled data that is applicable to a wide range of downstream classification tasks, improving upon the embedding learned by four prior unsupervised learning methods. Unsupervised learning is a fundamental, unsolved problem and has seen promising in domains such as image recognition and natural language understanding BID19. A central use case of unsupervised learning methods is enabling better or more efficient learning of downstream tasks by training on top of unsupervised representations BID23 BID7 or fine-tuning a learned model BID13. However, since the downstream objective requires access to supervision, the objectives used for unsupervised learning are only a rough proxy for downstream performance. If a central goal of unsupervised learning is to learn useful representations, can we derive an unsupervised learning objective that explicitly takes into account how the representation will be used?The use of unsupervised representations for downstream tasks is closely related to the objective of meta-learning techniques: finding a learning procedure that is more efficient and effective than learning from scratch. However, unlike unsupervised learning methods, meta-learning methods require large, labeled datasets and hand-specified task distributions. These dependencies are major obstacles to widespread use of these methods for few-shot classification. To begin addressing these problems, we propose an unsupervised meta-learning method: one which aims to learn a learning procedure, without supervision, that is useful for solving a wide range of new, human-specified tasks. With only raw, unlabeled observations, our model's goal is to learn a useful prior such that, after meta-training, when presented with a modestly-sized dataset for a human-specified task, the model can transfer its prior experience to efficiently learn to perform the new task. If we can build such an algorithm, we can enable few-shot learning of new tasks without needing any labeled data nor any pre-defined tasks. To perform unsupervised meta-learning, we need to automatically construct tasks from unlabeled data. We study several options for how this can be done. We find that a good task distribution should be diverse, but also not too difficult: naïve random approaches for task generation produce tasks that contain insufficient regularity to enable useful meta-learning. 
To that end, our method proposes tasks by first leveraging prior unsupervised learning algorithms to learn an embedding of the input data, and then performing an overcomplete partitioning of the dataset to construct numerous categorizations of the data. We show how we can derive classification tasks from these categorizations for use with meta-learning algorithms. Surprisingly, even with simple mechanisms for partitioning the embedding space, such as k-means clustering, we find that meta-learning acquires priors that, when used to learn new, human-designed tasks, learn those tasks more effectively than methods that directly learn on the embedding. That is, the learning algorithm acquired through unsupervised meta-learning achieves better downstream performance than the original representation used to derive meta-training tasks, without introducing any additional assumptions or supervision. See Figure 1 for an illustration of the complete approach. The core idea in this paper is that we can leverage unsupervised embeddings to propose tasks for a meta-learning algorithm, leading to an unsupervised meta-learning algorithm that is particularly effective as pre-training for human-specified downstream tasks. In the following sections, we formalize our problem assumptions and goal, which match those of unsupervised learning, and discuss several options for automatically deriving tasks from embeddings. We instantiate our method with two meta-learning algorithms and compare to prior state-of-the-art unsupervised learning methods. Across four image datasets (MNIST, Omniglot, miniImageNet, and CelebA), we find that our method consistently leads to effective downstream learning of a variety of human-specified tasks, including character recognition tasks, object classification tasks, and facial attribute discrimination tasks, without requiring any labels or hand-designed tasks during meta-learning and where key hyperparameters of our method are held constant across all domains. We show that, even though our unsupervised meta-learning algorithm trains for one-shot generalization, one instantiation of our approach performs well not only on few-shot learning, but also when learning downstream tasks with up to 50 training examples per class. In fact, some of our begin to approach the performance of fully-supervised meta-learning techniques trained with fully-specified task distributions....,,. Figure 1: Illustration of the proposed unsupervised meta-learning procedure. Embeddings of raw observations are clustered with k-means to construct partitions, which give rise to classification tasks. Each task involves distinguishing between examples from N = 2 clusters, with Km-tr = 1 example from each cluster being a training input. The meta-learner's aim is to produce a learning procedure that successfully solves these tasks. In this section, we describe our problem setting in relation to that of unsupervised and semisupervised learning, provide necessary preliminaries, and present our approach. Our goal is to leverage unlabeled data for the efficient learning of a range of human-specified downstream tasks. We only assume access to an unlabeled dataset D = {x i} during meta-training. After learning from the unlabeled data, which we will refer to as unsupervised meta-training, we want to apply what was learned towards learning a variety of downstream, human-specified tasks from a modest amount of labeled data, potentially as few as a single example per class. 
These downstream tasks may, in general, have different underlying classes or attributes (in contrast to typical semi-supervised problem assumptions), but are assumed to have inputs from the same distribution as the one from which datapoints in D are drawn. Concretely, we assume that downstream tasks are M -way classification tasks, and that the goal is to learn an accurate classifier using K labeled datapoints (x k, y k) from each of the M classes, where K is relatively small (i.e. between 1 and 50).The unsupervised meta-training phase aligns with the unsupervised learning problem in that it involves no access to information about the downstream tasks, other than the fact that they are M -way classification tasks, for variable M upper-bounded by N. The upper bound N is assumed to be known during unsupervised meta-training, but otherwise, the values of M and K are not known a priori. As a , the unsupervised meta-training phase needs to acquire a sufficiently general prior for applicability to a range of classification tasks with variable quantities of data and classes. This problem definition is our prototype for a practical use-case in which a user would like to train an application-specific image classifier, but does not have an abundance of labeled data. Unsupervised embedding learning. An unsupervised embedding learning algorithm E is a procedure that takes as input an unlabeled dataset D = {x i} and outputs a mapping from {x i} to embeddings {z i}. These embedded points are typically lower-dimensional and arranged such that distances correspond to meaningful differences between inputs, in contrast to distances between the original inputs, such as image pixels, which are not meaningful measures of image similarity. Task. An M -way K-shot classification task T consists of K training datapoints and labels {(x k, k)} per class, which are used for learning a classifier, and Q query datapoints and labels per class, on which the learned classifier is evaluated. That is, in a task there are K + Q = R datapoints and labels for each of the M classes. Meta-learning. A supervised meta-learning algorithm M(·) takes as input a set of supervised metatraining tasks {T t}. It produces a learning procedure F(·), which, in turn, ingests the supervised training data of a task to produce a classifier f (·). The goal of M is to learn F such that, when faced with a meta-test time task T t held-out from {T t}, F can learn a f t that accomplishes T t. At a high level, the quintessential meta-learning strategy is to have M iterate over {T t}, cycling between applying the current form of F t on training data from T t to learn f t, assessing its performance by calculating some meta-loss L on held-out data from the task, and optimizing L to improve the learning procedure. We build upon two meta-learning algorithms: model agnostic meta-learning (MAML) BID15 and prototypical networks (ProtoNets) BID29. MAML aims to learn the initial parameters of a deep network such that one or a few gradient steps leads to effective generalization; it specifies F as gradient descent starting from the meta-learned parameters. ProtoNets aim to metalearn a representation in which a class is effectively identified by its prototype, defined to be the mean of the class' training examples in the meta-learned space; F is the computation of these class prototypes, and f is a linear classifier that predicts the class whose prototype is closest in Euclidean distance to the query's representation. Task generation for meta-learning. 
We briefly summarize how tasks are typically generated from labeled datasets {(x i, y i)} for supervised meta-learning, as introduced by BID26. For simplicity, consider the case where the labels are discrete scalar values y i. To construct an N -way classification task T (assuming N is not greater than the number of unique y i), we can sample N classes, sample R datapoints {x r} n for each of the N classes, and sample a permutation of N distinct one-hot vectors (n) to serve as task-specific labels of the N sampled classes. The task is then defined as T = {(x n,r, n) | x n,r ∈ {x r} n }. Of course, this procedure is only possible with labeled data; in the next section, we discuss how we can construct tasks without ground-truth labels. We approach our problem from a meta-learning perspective, framing the problem as the acquisition, from unlabeled data, of an efficient learning procedure that is transferable to human-designed tasks. In particular, we aim to construct classification tasks from the unlabeled data and then learn how to efficiently learn these tasks. If such tasks are adequately diverse and structured, then metalearning these tasks should enable fast learning of new, human-provided tasks. A key question, then, is how to automatically construct such tasks from unlabeled data D = {x i}. Notice that in the supervised meta-learning task generation procedure detailed in Section 2.2, the labels y i induce a partition P = {C c} over {x i} by assigning all datapoints with label y c to subset C c. Once a partition is obtained, task generation is simple; we can reduce the problem of constructing tasks to that of constructing a partition over {x i}. All that's left is to find a principled alternative to human labels for defining the partition. A naïve approach is to randomly partition the data D. While such a scheme introduces diverse tasks, there is no structure; that is, there is no consistency between a task's training data and query data, and hence nothing to be learned during each task, let alone across tasks. As seen in TAB3, providing a meta-learner with purely random tasks in failed meta-learning. To construct tasks with structure that resembles that of human-specified labels, we need to group datapoints into consistent and distinct subsets based on salient features. With this motivation in mind, we propose to use k-means clustering. Consider the partition P = {C c} learned by k-means as a simplification of a Gaussian mixture model p(x|c)p(c). If the clusters can recover a semblance of the true class-conditional generative distributions p(x|c), creating tasks based on treating these clusters as classes should in useful unsupervised meta-training. However, the of k-means is critically dependent on the metric space on which its objective is defined. Clustering in pixel-space is unappealing for two reasons: distance in pixel-space correlates poorly with semantic meaning, and the high dimensionality of raw images renders clustering difficult in practice. We empirically show in TAB3 that meta-learning with tasks defined by pixel-space clusters, with preprocessing as directed by BID8, also fails. We are now motivated to cluster in spaces in which common distance functions correlate to semantic meaning. However, we must satisfy the constraints of our problem statement in the process of learning such spaces. To these ends, we use state-of-the-art unsupervised learning methods to produce useful embedding spaces. 
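To make the preceding procedure concrete, going from an embedding space to meta-training tasks amounts to clustering followed by sampling. The sketch below is a minimal illustration, not the authors' released code: the function names, the use of scikit-learn's k-means, and the per-dimension scaling range are our own choices, and task labels are plain integers rather than permuted one-hot vectors.

```python
import numpy as np
from sklearn.cluster import KMeans

def make_partition(z, k, rng):
    """Cluster embeddings z (num_points x dim) into k subsets; random per-dimension
    scaling plays the role of the diagonal metric matrix A described in the text."""
    scales = rng.uniform(0.1, 10.0, size=z.shape[1])             # scaling range is our guess
    labels = KMeans(n_clusters=k, n_init=3).fit_predict(z * scales)
    return [np.flatnonzero(labels == c) for c in range(k)]

def make_task(x, partition, n_way, k_shot, n_query, rng):
    """Sample one N-way, (K+Q)-shot classification task from a single partition."""
    r = k_shot + n_query
    usable = [c for c in partition if len(c) >= r]                # clusters big enough for the task
    chosen = rng.choice(len(usable), size=n_way, replace=False)   # uniform over clusters, not over |C_c|
    support, query = [], []
    for label, c in enumerate(chosen):
        idx = rng.choice(usable[c], size=r, replace=False)
        support += [(x[i], label) for i in idx[:k_shot]]
        query += [(x[i], label) for i in idx[k_shot:]]
    return support, query

# Usage sketch: rng = np.random.default_rng()
# partitions = [make_partition(z, 500, rng) for _ in range(50)]
# then sample a partition uniformly and call make_task on it for each meta-training task.
```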
For qualitative evidence in the unsupervised learning literature that such embedding spaces exhibit semantic meaning, see BID7; BID4; BID11. We note that while a given embedding space may not be directly suitable for highly-efficient learning of new tasks (which would require the embedding space to be precisely aligned or adaptable to the classes of those tasks), we can still leverage it for the construction of structured tasks, a process for which the requirements are less strict. Thus, we first run an out-of-the-box unsupervised embedding learning algorithm E on D, then map the data {x i} into the embedding space Z, producing {z i}. To produce a diverse task set, we generate P partitions {P p} by running clustering P times, applying random scaling to the dimensions of Z to induce a different metric, represented by a diagonal matrix A, for each run of clustering. With µ_c denoting the learned centroid of cluster C_c, a single run of clustering can be summarized as P = {C_c} = argmin over {C_c}, {µ_c} of Σ_c Σ_{z_i ∈ C_c} ||A z_i − µ_c||². We derive tasks for meta-learning from the partitions using the procedure detailed in Section 2.2, except we begin the construction of each task by sampling a partition from the uniform distribution U(P), and for x i ∈ C c, specify y i = c. To avoid imbalanced clusters dominating the meta-training tasks, we opt not to sample from p(c) ∝ |C c |, but instead sample N clusters uniformly without replacement for each task. We note that BID5 are similarly motivated in their design decision of sampling data from a uniform distribution over clusters. With the partitions being constructed over {z i}, we have one more design decision to make: should we perform meta-learning on embeddings or images? We consider that, to successfully solve new tasks at meta-test time, a learning procedure F that takes embeddings as input would depend on the embedding function's ability to generalize to out-of-distribution observations. On the other hand, by meta-learning on images, F can separately adapt f to each evaluation task from the rawest level of representation. Thus, we choose to meta-learn on images. We call our method clustering to automatically construct tasks for unsupervised meta-learning (CACTUs). We detail the task construction algorithm in Algorithm 1, and provide an illustration of the complete unsupervised meta-learning approach for classification in Figure 1.

Algorithm 1 CACTUs for classification
1: procedure CACTUS(E, D, P, k, T, N, Km-tr, Q)
2: Run embedding learning algorithm E on D and produce embeddings {zi} from observations {xi}.
3: Run k-means on {zi} P times (with random scaling) to generate a set of partitions {Pp = {Cc}p}.
4: for t from 1 to the number of desired tasks T do
5: Sample a partition P uniformly at random from the set of partitions {Pp}.
6: Sample a cluster Cn uniformly without replacement from P for each of the N classes desired for a task.
7: Sample an embedding zr without replacement from Cn for each of the R = Km-tr + Q training and query examples desired for each class, and record the corresponding datapoint xn,r.
8: Sample a permutation (n) of N one-hot labels.
9: Construct Tt = {(xn,r, n)}.
10: return {Tt}

The method we propose aims to address the unsupervised learning problem, namely acquiring a transferable learning procedure without labels. We show that our method is complementary to a number of unsupervised learning methods, including ACAI BID3, BiGAN BID11 BID12, DeepCluster BID5, and InfoGAN: we leverage these prior methods to learn embeddings used for constructing meta-learning tasks, and demonstrate that our method learns a more useful representation than the embeddings. The ability to use what was learned during unsupervised pre-training to better or more efficiently learn a variety of downstream tasks is arguably one of the most practical applications of unsupervised learning methods, and has a long history in neural network training (BID1; BID20; BID31; BID13). Unsupervised pre-training has demonstrated success in a number of domains, including speech recognition BID34, image classification, machine translation BID19, and text classification BID9; BID18. Our approach, unsupervised meta-learning, can be viewed as an unsupervised learning algorithm that explicitly optimizes for few-shot transferability. As a result, we can expect it to better learn human-specified downstream tasks, compared to unsupervised learning methods that optimize for other metrics, such as reconstruction, fidelity of constructed images BID25 BID11 BID12, representation interpolation BID3, disentanglement BID2 BID23 BID7; BID10, and clustering BID8 Krähenbühl et al., 2016; BID4 BID5. We empirically evaluate this hypothesis in the next section. In contrast to many previous evaluations of unsupervised pre-training, we focus on settings in which only a small amount of data for the downstream tasks is available, since this is where the unlabeled data can be maximally useful. Unsupervised pre-training followed by supervised learning can be viewed as a special case of the semi-supervised learning problem (BID21). However, in contrast to our problem statement, semi-supervised learning methods assume that a significant proportion of the unlabeled data, if not all of it, shares underlying labels with the labeled data. Additionally, our approach and other unsupervised learning methods are well-suited for transferring their learned representation to many possible downstream tasks or labelings, whereas semi-supervised learning methods typically optimize for performance on a single task, with respect to a single labeling of the data. Our method builds upon the ideas of meta-learning BID27 BID0 and few-shot learning BID26 BID33 BID22; BID29. We apply two meta-learning algorithms, model-agnostic meta-learning BID15 and prototypical networks BID29, to tasks constructed in an unsupervised manner. Similar to our problem setting, some prior works have aimed to learn an unsupervised learning procedure with supervised data. Instead, we consider a problem setting that is entirely unsupervised, aiming to learn efficient learning algorithms using unlabeled datasets. Our problem setting is similar to that considered in prior work, but we develop an approach that is suitable for supervised downstream tasks, rather than reinforcement learning problems, and demonstrate our algorithm on problems with high-dimensional visual observations. We begin the experimental section by presenting our research questions and how our experiments are designed to address them. Links to code for the experiments can be found at https://sites.google.com/view/unsupervised-via-meta. Benefit of meta-learning. Is there any significant benefit to doing meta-learning on tasks derived from embeddings, or is the embedding function already sufficient for downstream supervised learning of new tasks?
To investigate this, we run MAML and ProtoNets on tasks generated via CACTUs (CACTUs-MAML, CACTUs-ProtoNets). We compare to five alternate algorithms, with four being supervised learning methods on top of the embedding function. i) Embedding k nn -nearest neighbors first infers the embeddings of the downstream task images. For a query test image, it predicts the plurality vote of the labels of the k nn training images that are closest in the embedding space to the query's embedding. ii) Embedding linear classifier also begins by inferring the embeddings of the downstream task images. It then fits a linear classifier using the N K training embeddings and labels, and predicts labels for the query embeddings using the classifier. iii) Embedding multilayer perceptron instead uses a network with one hidden layer of 128 units and tuned dropout BID30. iv) To isolate the effect of meta-learning on images, we also compare to embedding cluster matching, i.e. directly using the meta-training clusters for classification by labeling clusters with a task's training data via plurality vote. If a query datapoint maps to an unlabeled cluster, the closest labeled cluster is used. v) As a baseline, we forgo any unsupervised pre-training and train a model with the MAML architecture from standard random network initialization via gradient descent separately for each evaluation task. Different embedding spaces. Does CACTUs in successful meta-learning for many distinct task-generating embeddings? To investigate this, we run unsupervised meta-learning using four embedding learning algorithms: ACAI BID3, BiGAN BID11, DeepCluster BID5, and InfoGAN. These four approaches collectively cover the following range of objectives and frameworks in the unsupervised learning literature: generative modeling, two-player games, reconstruction, representation interpolation, discriminative clustering, and information maximization. We describe these methods in more detail in Appendix A.Applicability to different tasks. Can unsupervised meta-learning yield a good prior for a variety of task types? In other words, can unsupervised meta-learning yield a good representation for tasks that assess the ability to distinguish between features on different scales, or tasks with various amounts of supervision signal? To investigate this, we evaluate our procedure on tasks assessing recognition of character identity, object identity, and facial attributes. For this purpose we choose to use the existing Omniglot BID26 and miniImageNet BID22 ) datasets and few-shot classification tasks and, inspired by, also construct a new few-shot classification benchmark based on the CelebA dataset and its binary attribute annotations. For miniImageNet, we consider both few-shot downstream tasks and tasks involving larger datasets (up to 50-shot). Specifics on the datasets and human-designed tasks are presented in Appendix B.Oracle. How does the performance of our unsupervised meta-learning method compare to supervised meta-learning with a human-specified, near-optimal task distribution derived from a labeled dataset? To investigate this, we use labeled versions of the meta-training datasets to run MAML and ProtoNets as supervised meta-learning algorithms (Oracle-MAML, Oracle-ProtoNets). To facilitate fair comparison with the unsupervised variants, we control for the relevant hyperparameters. Task construction ablation. How do the alternatives for constructing tasks from the embeddings compare? 
To investigate this, we run MAML on tasks constructed via clustering (CACTUs-MAML) and MAML on tasks constructed via random hyperplane slices of the embedding space with varying margin (Hyperplanes-MAML). The latter partitioning procedure is detailed in Appendix C. For the experiments where tasks are constructed via clustering, we also investigate the effect of sampling based on a single partition versus multiple partitions. We additionally experiment with tasks based on random assignments of images to "clusters" (Random-MAML) and tasks based on pixel-space clusters (Pixels CACTUs-MAML) with the Omniglot dataset. To investigate the limitations of our method, we also consider an easier version of our problem statement where the data distributions at meta-training and meta-test time perfectly overlap, i.e. the images share a common set of underlying labels (Appendix D). Finally, we present on miniImageNet after unsupervised meta-learning on most of ILSVRC 2012 (Appendix G). As discussed by , keeping proper experimental protocol is particularly important when evaluating unsupervised and semi-supervised learning algorithms. Our foremost concern is to avoid falsely embellishing the capabilities of our approach by overfitting to the specific datasets and task types that we consider. To this end, we adhere to two key principles. We do not perform any architecture engineering: we use architectures from prior work as-is, or lightly adapt them to our needs if necessary. We also keep hyperparameters related to the unsupervised meta-learning stage as constant as possible across all experiments, including the MAML and ProtoNets model architectures. Details on hyperparameters and architectures are presented in Appendix E. We assume knowledge of an upper bound on the number of classes N present in each downstream meta-testing task for each dataset. However, regardless of the number of shots K, we do not assume knowledge of K during unsupervised meta-learning. We use N -way 1-shot tasks during meta-training, but test on larger values of K during meta-testing. We partition each dataset into meta-training, meta-validation, and meta-testing splits. For Omniglot and miniImageNet, these splits contain disjoint sets of classes. For all algorithms, we run unsupervised pre-training on the unlabeled meta-training split and report performance on downstream tasks dictated by the labeled data of the meta-testing split, generated using the procedure from prior work recounted in Section 2.2. For the supervised meta-learning oracles, meta-training tasks are constructed in the same manner but from the dataset's meta-training split. See FIG0 for illustrative examples of embedding-derived clusters and human-designed test tasks. To facilitate analysis on meta-overfitting, we use the labels of the meta-validation split (instead of clustering embeddings) to construct tasks for meta-validation. However, because our aim is to perform meta-learning without supervision, we do not tune hyperparameters on this labeled data. We use a fixed number of meta-training iterations, since there is no suitable criterion for early stopping. When we experiment with the embedding-plus-supervised-learning methods used as fair comparisons to unsupervised meta-learning, we err on the side of providing more supervision and data than technically allowed. Specifically, we separately tune the supervised learning hyperparameters for each dataset and each task difficulty on the labeled version of the meta-validation split. 
With DeepCluster embeddings, we also use the entire meta-testing split's statistics to perform dimensionality reduction (via PCA) and whitening, which is unfair as this shares information across tasks. Our primary results are summarized in TAB1. Task construction ablations are summarized in TAB3. Benefit of meta-learning. CACTUs-MAML consistently yields a learning procedure that results in more successful downstream task performance than all other unsupervised methods, including those that learn on top of the embedding that generated meta-training tasks for MAML. We find the same for CACTUs-ProtoNets for 1-shot downstream tasks. However, as noted by BID29, ProtoNets perform best when meta-training shot and meta-testing shot are matched; this characteristic prevents ProtoNets from improving upon ACAI for 20-way 5-shot Omniglot and upon DeepCluster for 50-shot miniImageNet. We attribute the success of CACTUs-based meta-learning over the embedding-based methods to two factors: its practice in distinguishing between many distinct sets of clusters from modest amounts of signal, and the underlying classes of the meta-testing split data being out-of-distribution. In principle, the latter factor is solely responsible for the success over embedding cluster matching, since this algorithm can be viewed as a meta-learner on embeddings that trivially obtains perfect accuracy (via memorization) on the meta-training tasks. The same factor also helps explain why training from standard network initialization is, in general, competitive with directly using the task-generating embedding as a representation. On the other hand, the MNIST results (TAB8 in Appendix F) suggest that when the meta-training and meta-testing data distributions have perfect overlap and the embedding is well-suited enough that embedding cluster matching can already achieve high performance, CACTUs-MAML yields only a small benefit, as we would expect. Different embedding spaces. CACTUs is effective for a variety of embedding learning methods used for task generation. The performance of unsupervised meta-learning can largely be predicted by the performance of the embedding-based non-meta-learning methods. For example, the ACAI embedding does well with Omniglot, leading to the best unsupervised results with ACAI CACTUs-MAML. Likewise, on miniImageNet, the best performing prior embedding (DeepCluster) also corresponds to the best performing unsupervised meta-learner (DeepCluster CACTUs-MAML). Applicability to different tasks. CACTUs-MAML learns an effective prior for a variety of task types. This can be attributed to the application-agnostic task-generation process and the expressive power of MAML. We also observe that, despite all meta-learning models being trained for N-way 1-shot classification of unsupervised tasks, the models work well for a variety of M-way K-shot tasks, where M ≤ N and K ≤ 50. As mentioned previously, the representation that CACTUs-ProtoNets learns is best suited for downstream tasks which match the single shot used for meta-training. Oracle. The penalty for not having ground truth labels to construct near-optimal tasks ranges from substantial to severe, depending on the difficulty of the downstream task. Easier downstream tasks (which have fewer classes and/or more supervision) incur less of a penalty. We conjecture that with such tasks, the difference in the usefulness of the priors matters less since the downstream task-specific evidence has more power to shape the posterior. Task construction ablation.
As seen in TAB3, CACTUs-MAML consistently outperforms Hyperplanes-MAML with any margin. We hypothesize that this is due to the issues with zero-margin Hyperplanes-MAML pointed out in Appendix C, and the fact that nonzero-margin Hyperplanes-MAML is able to use less of the meta-training split to generate tasks than CACTUs-MAML is. We find that using multiple partitions for CACTUs-MAML, while beneficial, is not strictly necessary. Using non-zero margin with Hyperplanes-MAML is crucial for miniImageNet, but not for Omniglot. We conjecture that the enforced degree of separation between classes is needed for miniImageNet because of the dataset's high diversity. Meta-learning on random tasks or tasks derived from pixel-space clustering TAB3 ) in a prior that is much less useful than any other considered algorithm, including a random network initialization; evidently, practicing badly is worse than not practicing at all. Note on overfitting. Because of the combinatorially many unsupervised tasks we can create from multiple partitions of the dataset, we do not observe substantial overfitting to the unsupervised metatraining tasks. However, we observe that meta-training performance is sometimes worse than metatest time performance, which is likely due to a portion of the automatically generated tasks being based on nonsensical clusters (for examples, see FIG0). Additionally, we find that, with a few exceptions, using multiple partitions has a regularizing effect on the meta-learner: a diverse task set reduces overfitting to the meta-training tasks and increases the applicability of the learned prior. We demonstrate that meta-learning on tasks produced using simple mechanisms based on embeddings improves upon the utility of these representations in learning downstream, human-specified tasks. We empirically show that this holds across benchmark datasets and tasks in the few-shot classification literature BID26 BID22, task difficulties, and embedding learning methods while fixing key hyperparameters across all experiments. In a sense, CACTUs can be seen as a facilitating interface between an embedding learning method and a meta-learning algorithm. As shown in the , the meta-learner's performance significantly depends on the nature and quality of the task-generating embeddings. We can expect our method to yield better performance as the methods that produce these embedding functions improve, becoming better suited for generating diverse yet distinctive clusterings of the data. However, the gap between unsupervised and supervised meta-learning will likely persist because, with the latter, the meta-training task distribution is human-designed to mimic the expected evaluation task distribution as much as possible. Indeed, to some extent, supervised meta-learning algorithms offload the effort of designing and tuning algorithms onto the effort of designing and tuning task distributions. With its evaluation-agnostic task generation, CACTUs-based meta-learning trades off performance in specific use-cases for broad applicability and the ability to train on unlabeled data. In principle, CACTUs-based meta-learning may outperform supervised meta-learning when the latter is trained on a misaligned task distribution. We leave this investigation to future work. While we have demonstrated that k-means is a broadly useful mechanism for constructing tasks from embeddings, it is unlikely that combinations of k-means clusters in learned embedding spaces are universal approximations of arbitrary class definitions. 
An important direction for future work is to find examples of datasets and human-designed tasks for which CACTUs-based meta-learning in ineffective downstream learning. This will in better understanding of the practical scope of applicability for our method, and spur further development in automatic task construction mechanisms for unsupervised meta-learning. A potential concern of our experimental evaluation is that MNIST, Omniglot, and miniImageNet exhibit particular structure in the underlying class distribution (i.e., perfectly balanced classes), since they were designed to be supervised learning benchmarks. In more practical applications of machine learning, such structure would likely not exist. Our CelebA indicate that CACTUs is effective even in the case of a dataset without neatly balanced classes or attributes. An interesting direction for future work is to better characterize the performance of CACTUs and other unsupervised pretraining methods with highly-unstructured, unlabeled datasets. Since MAML and ProtoNets produce nothing more than a learned representation, our method can be viewed as deriving, from a previous unsupervised representation, a new representation particularly suited for learning downstream tasks. Beyond visual classification tasks, the notion of using unsupervised pre-training is generally applicable to a wide range of domains, including regression, speech , language , and reinforcement learning BID28. Hence, our unsupervised meta-learning approach has the potential to improve unsupervised representations for a variety of such domains, an exciting avenue for future work. We evaluate four distinct methods from prior work for learning the task-generating embeddings. In adversarially constrained autoencoder interpolation (ACAI), a convolutional autoencoder's pixelwise L 2 loss is regularized with a term encouraging meaningful interpolations in the latent space BID3. Specifically, a critic network takes as input a synthetic image generated from a convex combination of the latents of two dataset samples, and regresses to the mixing factor. The decoder of the autoencoder and the generator for the critic are one and the same. The regularization term is minimized when the autoencoder fools the critic into predicting that the synthetic image is a real sample. The bidirectional GAN (BiGAN) is an instance of a generative-adversarial framework in which the generator produces both synthetic image and embedding from real embedding and image, respectively BID11 BID12. Discrimination is done in joint imageembedding space. The DeepCluster method does discriminative clustering by alternating between clustering the features of a convolutional neural network and using the clusters as labels to optimize the network weights via backpropagating a standard classification loss BID5.The InfoGAN framework conceptually decomposes the generator's input into a latent code and incompressible noise. The structure of the latent code is hand-specified based on knowledge of the dataset. The canonical GAN minimax objective is regularized with a mutual information term between the code and the generated image. In practice, this term is optimized using variational inference, involving the approximation of the posterior with an auxiliary distribution Q(code|image) parameterized by a recognition network. Whereas ACAI explicitly optimizes pixel-wise reconstruction error, BiGAN only encourages the fidelity of generated image and latent samples with respect to their respective prior distributions. 
While InfoGAN also encourages the fidelity of generated images, it leverages domain-specific knowledge to impose a favorable structure on the embedding space and information-theoretic methods for optimization. DeepCluster departs from the aforementioned methods in that it is not concerned with generation or decoding, and only seeks to learn general-purpose visual features by way of end-to-end discriminative clustering. The Omniglot dataset consists of 1623 characters each with 20 hand-drawn examples. Ignoring the alphabets from which the characters originate, we use 1100, 100, and 423 characters for our meta-training, meta-validation, and meta-testing splits. The miniImageNet dataset consists of 100 classes each with 600 examples. The images are predominantly natural and realistic. We use the same meta-training/meta-validation/meta-testing splits of 64/16/20 classes as proposed by BID22. The CelebA dataset includes 202,599 facial images of celebrities and 40 binary attributes that annotate every image. We follow the prescribed 162,770/19,867/19,962 data split. For Omniglot and miniImageNet, supervised meta-learning tasks and evaluation tasks are constructed exactly as detailed in Section 2.2: for an N -way K-shot task with Q queries per class, we sample N classes from the data split and K + Q datapoints per class, labeling the task's data with a random permutation of N one-hot vectors. For CelebA, we consider binary classification tasks (i.e., 2-way), each defined by 3 attributes and an ordering of 3 Booleans, one for each attribute. Every image in a task-specific class shares all task-specific attributes with each other and none with images in the other class. For example, the task illustrated in FIG0 involves distinguishing between images whose subjects satisfy not Sideburns, Straight Hair, and not Young, and those whose subjects satisfy Sideburns, not Straight Hair, and Young. To keep with the idea of having distinct classes for meta-training and meta-testing, we split the task-defining attributes. For the supervised meta-learning oracle, we construct meta-training tasks from the first 20 attributes (when alphabetically ordered), meta-validation tasks from the next 10, and meta-testing tasks from the last 10. Discarding tasks with too few examples in either class, this in 4287, 391, and 402 task prototypes (but many more possible tasks). We use the same meta-test time tasks to evaluate the unsupervised methods. We only consider assessment with 5-shot tasks because, given that there are multiple attributes other than the task-defining ones, any 1-shot task is likely to be ill-defined. Given a set of embedding points {z i} in a space Z, a simple way of defining a partition P = {C c} on {z i} is to use random hyperplanes to slice Z into subspaces and assign the embeddings that lie in the c-th subspace to subset C c. However, a hyperplane slicing can group together two arbitrarily far embeddings, or separate two arbitrarily close ones; given our assumption that good embedding spaces have a semantically meaningful metric, this creates ill-defined classes. This problem can be partially alleviated by extending the hyperplane boundaries with a non-zero margin, as empirically shown in Section 4.2.We now describe how to generate tasks via random hyperplanes in the embedding space. We first describe a procedure to generate a partition P of the set of embeddings {z i} for constructing metatraining tasks. 
A given hyperplane slices the embedding space into two, so for an N-way task we need H = log2(N) hyperplanes to define sufficiently many subsets/classes for a task. To randomly define a hyperplane in d-dimensional embedding space, we sample a normal vector n and a point on the plane z_0, each with d elements. For an embedding point z, the signed point-plane distance is given by (n / ||n||_2) · (z − z_0). Defining H hyperplanes in this manner, we discard embeddings for which the signed point-plane distance to any of the H hyperplanes lies within (−m, m), where m is a desired margin. The H hyperplanes collectively define 2^H subspaces. We assign embedding points in the c-th subspace to subset C_c. We define the partition as P = {C_c}. We prune subsets that do not have at least R = K_m-tr + Q members, and check that the partition has at least N remaining subsets; if not, we reject the partition and restart the procedure. After obtaining partitions {P_p}, meta-training tasks can be generated by following Algorithm 1 from Line 4. In terms of practical implementation, we pre-compute 1000 hyperplanes and pruned pairs of subsets of {z_i}. We generate partitions by sampling combinations of the hyperplanes and taking intersections of their associated subsets to define the elements of the partition. We determine the number of partitions needed for a given Hyperplanes-MAML run by the number of meta-training tasks desired for the meta-learner: we fix 100 tasks per partition. The MNIST dataset consists of 70,000 hand-drawn examples of the 10 numerical digits. Our split respects the original MNIST 60,000/10,000 training/testing split. We assess on 10-way classification tasks. This setup results in examples from all 10 digits being present for both meta-training and meta-testing, making the problem setting essentially equivalent to that of semi-supervised learning sans a fixed permutation of the labels. The MNIST scenario is thus a special case of the problem setting considered in the rest of the paper. For MNIST, we only experiment with MAML as the meta-learning algorithm. For ACAI and InfoGAN we constructed the meta-validation split from the last 5,000 examples of the meta-training split; for BiGAN this figure was 10,000. After training the ACAI model and inferring embeddings, manually assigning labels to 10 clusters by inspection results in a classification accuracy of 96.00% on the testing split. As the ACAI authors observe, we found it important to whiten the ACAI embeddings before clustering. The same metric for the InfoGAN embedding (taking an argmax over the categorical dimensions instead of actually running clustering) is 96.83%. Note that these accuracies are an upper bound for embedding cluster matching. To see this, consider the 10-way 1-shot scenario: 1 example sampled from each cluster is insufficient to guarantee the optimal label for that cluster, and 1 example sampled from each label is not guaranteed to end up in the optimal category. Aside from CACTUs-MAML, embedding k_nn-nearest neighbors, embedding linear classifier, and embedding direct clustering, we also ran CACTUs-MAML on embeddings instead of raw images, using a simple model with 2 hidden layers with 64 units each and ReLU activation, and all other MAML hyperparameters being the same as in TAB6. Departing from the fixed k = 500 used for all other datasets, we deliberately use k = 10 to better understand the limitations of CACTUs-MAML. The results can be seen in TAB8 in Appendix B.
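As a companion to the hyperplane-based procedure of Appendix C above, the following is a rough sketch of how a single margin-pruned partition could be formed. It is our own illustration rather than the authors' implementation; pruning of undersized subsets and rejection of degenerate partitions are left to the caller, as described in the text.

```python
import numpy as np

def hyperplane_partition(z, n_way, margin, rng):
    """Slice the embedding space with H random hyperplanes into up to 2^H subsets."""
    num_h = int(np.ceil(np.log2(n_way)))                        # ceiling used for non-powers of two
    normals = rng.normal(size=(num_h, z.shape[1]))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    anchors = z[rng.choice(len(z), size=num_h, replace=False)]  # a point z_0 on each plane
    # signed point-plane distance of every embedding to every hyperplane
    dists = z @ normals.T - np.sum(anchors * normals, axis=1)
    keep = np.all(np.abs(dists) >= margin, axis=1)              # discard points inside the margin
    codes = (dists > 0).astype(int) @ (1 << np.arange(num_h))   # index of the subspace each point falls in
    subsets = [np.flatnonzero(keep & (codes == c)) for c in range(2 ** num_h)]
    return [s for s in subsets if len(s) > 0]
```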
In brief, with the better embeddings (ACAI and InfoGAN), there is only little benefit of CACTUs-MAML over embedding cluster matching. Additionally, even in the best cases, CACTUs-MAML falls short of state-of-the-art semi-supervised learning methods. For MNIST and Omniglot we use the same 4-block convolutional architecture as used by BID15 for their Omniglot experiments, but with 32 filters (instead of 64) for each convolutional layer for consistency with the model used for miniImageNet and CelebA, which is the same as what BID15 used for their miniImageNet experiments. When evaluating the meta-learned 20-way Omniglot model with 5-way tasks, we prune the unused output dimensions. The outer optimizer is Adam , and the inner optimizer is SGD. We build on the authors' publicly available codebase found at https://github.com/cbfinn/maml. When using batch normalization to process a task's training or query inputs, we observe that using only 1 query datapoint per class can allow the model to exploit batch statistics, learning a strategy analogous to a process of elimination that causes significant, but spurious, improvement in accuracy. To mitigate this, we fix 5 queries per class for every task's evaluation phase, meta-training or meta-testing. For the three considered datasets we use the same architecture as used by BID29 for their Omniglot and miniImageNet experiments. This is a 4-block convolutional architecture with each block consisting of a convolutional layer with 64 3 × 3 filters, stride 1, and padding 1, followed by BatchNorm, ReLU activation, and 2 × 2 MaxPooling. The ProtoNets embedding is simply the flattened output of the last block. We follow the authors and use the Adam optimizer, but do not use a learning rate scheduler. We build upon the authors' publicly available codebase found at https://github.com/jakesnell/prototypical-networks. For Omniglot, miniImageNet, and CelebA we fix the number of clusters k to be 500. For Omniglot we choose the number of partitions P = 100, but in the interest of keeping runtime manageable, choose P = 50 for miniImageNet and CelebA. ACAI BID3: We run ACAI for MNIST and Omniglot. We pad the images by 2 and use the authors' architecture. We use a 256-dimensional embedding for all datasets. We build upon the authors' publicly available codebase found at https://github.com/ brain-research/acai. We unsuccessfully try running ACAI on 64 × 64 miniImageNet and CelebA. To facilitate this input size, we add one block consisting of two convolutional layers (512 filters each) and one downsampling/upsampling layer to the encoder and decoder. However, because of ACAI's pixel-wise reconstruction loss, for these datasets the ACAI embedding prioritizes information about the few "features" that dominate the reconstruction pixel count, ing in clusters that only corresponded to a limited range of factors, such as color and pose. For curiosity's sake, we tried running meta-learning on tasks derived from these uninteresting clusters anyways, and found that the meta-learner quickly produced a learning procedure that obtained high accuracy on the meta-training tasks. However, this learned prior was not useful for solving downstream tasks. BiGAN BID11: For MNIST, we follow the BiGAN authors and specify a uniform 50-dimensional prior on the unit hypercube for the latent. The BiGAN authors use a 200-dimensional version of the same prior for their ImageNet experiments, so we follow suit for Omniglot, miniImageNet, and CelebA. 
For MNIST and Omniglot, we use the permutation-invariant architecture (i.e. fully connected layers only) used by the authors for their MNIST results; for miniImageNet and CelebA, we randomly crop to 64 × 64 and use the AlexNet-inspired architecture used by BID11 for their ImageNet results. We build upon the authors' publicly available codebase found at https://github.com/jeffdonahue/bigan. DeepCluster BID5: We run DeepCluster for miniImageNet and CelebA, which we respectively randomly crop and resize to 64 × 64. We modify the first layer of the AlexNet architecture used by the authors to accommodate this input size. We follow the authors and use the input to the (linear) output layer as the embedding. These are 4096-dimensional, so we follow the authors and apply PCA to reduce the dimensionality to 256, followed by whitening. We build upon the authors' publicly available codebase found at https://github.com/facebookresearch/deepcluster. InfoGAN: We only run InfoGAN for MNIST. We follow the InfoGAN authors and specify the product of a 10-way categorical distribution and a 2-dimensional uniform distribution as the latent code. We use the authors' architecture. Given an image, we use the recognition network to obtain its embedding. We build upon the authors' publicly available codebase found at https://github.com/openai/InfoGAN. This section contains full experimental results for the MNIST, Omniglot, miniImageNet, and CelebA datasets, including consolidated versions of the tables found in the main text. The metric is classification accuracy averaged over 1000 tasks based on human-specified labels of the testing split, with 95% confidence intervals. d: dimensionality of embedding, h: number of hidden units in a fully connected layer, k: number of clusters in a partition, P: number of partitions used during meta-learning, m: margin on boundary-defining hyperplanes. One tabulated result used 64 filters per convolutional layer, 3× data augmentation, and folded the validation set into the training set after hyperparameter tuning. (The accuracy figures that appeared here as flattened table rows, comparing embedding cluster matching, CACTUs-MAML, CACTUs-ProtoNets, embedding k_nn-nearest neighbors, embedding linear classifier, embedding MLP with dropout, and the supervised Oracle-MAML and Oracle-ProtoNets controls, belong to the consolidated appendix tables.) We investigate unsupervised meta-learning in the context of a larger unsupervised meta-training dataset by using the ILSVRC 2012 dataset's training split BID24, which is a superset of the miniImageNet dataset (including meta-validation and meta-testing data) consisting of 1000 classes and over 1,200,000 images. To facilitate comparison to the previous miniImageNet experiments, for meta-validation and meta-test we use the miniImageNet meta-validation and meta-test splits. To avoid task leakage, we hold out all data from these 36 underlying classes from the rest of the data to construct the meta-training split. For CACTUs, we use the best-performing unsupervised learning method from the previous experiments, DeepCluster, to obtain the embeddings.
Following BID5, we run DeepCluster using the VGG-16 architecture with a 256-dimensional feature space and 10,000 clusters on the meta-training data until the normalized mutual information between the data-cluster mappings of two consecutive epochs converges. To our knowledge, no prior works have yet been published on using MAML for ImageNet-sized meta-learning. We extend the standard convolutional neural network model class with residual connections , validate hyperparameters with supervised meta-learning, then use it for unsupervised meta-learning without further tuning. See TAB1 for MAML hyperparameters. The training from scratch, embedding k nn -nearest neighbors, and embedding linear classifier algorithms are the same as they were in the previous sets of experiments. For Oracle-MAML, we generated tasks using the ground-truth 964 ImageNet meta-training classes. We also run semi-supervised MAML, with the meta-training tasks consisting of CACTUs-based tasks as well as tasks constructed from the 64 miniImageNet meta-training classes. The unsupervised/supervised task proportion split was fixed according to the ratio of the number of data available to each task proposal method. As before, the meta-learning methods only meta-learned on 1-shot tasks. We find that the vastly increased amount of unlabeled meta-training data (in comparison to miniImageNet) in significant increases for all methods over their counterparts in TAB9 (other than training from scratch, which does not use this data). We find that CACTUs-MAML slightly outperforms embedding linear classifier for the 1-shot test tasks, but that the linear classifier on top of the unsupervised embedding becomes better as the amount of test time supervision increases. Augmenting the unsupervised tasks with (a small number of) supervised tasks during meta-training in slight improvement for the 1-shot test tasks. The lackluster performance of CACTUs-MAML is unsurprising insofar as meta-learning with large task spaces is still an open problem: higher shot Oracle-MAML only marginally stays ahead of the embedding linear classifier, which is not the case in the other, smaller-scale experiments. We expect that using a larger architecture in conjunction with MAML (such as) would in increased performance for all methods based on MAML. Further, given the extensive degree to which unsupervised learning methods have been studied, we suspect that unsupervised task construction coupled with better meta-learning algorithms and architectures will in improved performance on the entire unsupervised learning problem. We leave such investigation to future work.
An unsupervised learning method that uses meta-learning to enable efficient learning of downstream image classification tasks, outperforming state-of-the-art methods.
Domain transfer is an exciting and challenging branch of machine learning because models must learn to smoothly transfer between domains, preserving local variations and capturing many aspects of variation without labels. However, most successful applications to date require the two domains to be closely related (e.g. image-to-image, video-to-video), utilizing similar or shared networks to transform domain-specific properties like texture, coloring, and line shapes. Here, we demonstrate that it is possible to transfer across modalities (e.g. image-to-audio) by first abstracting the data with latent generative models and then learning transformations between latent spaces. We find that a simple variational autoencoder is able to learn a shared latent space to bridge between two generative models in an unsupervised fashion, and even between different types of models (e.g. a variational autoencoder and a generative adversarial network). We can further impose desired semantic alignment of attributes with a linear classifier in the shared latent space. The proposed variational autoencoder enables preserving both locality and semantic alignment through the transfer process, as shown in the qualitative and quantitative evaluations. Finally, the hierarchical structure decouples the cost of training the base generative models and semantic alignments, enabling computationally efficient and data-efficient retraining of personalized mapping functions. Domain transfer has long captured the imagination of inventors and artists alike. The early precursor of the phonograph, the phonautograph, was actually inspired by the idea of "words which write themselves", where the shape of audio waveforms would transform into the shape of writing, capturing the content and character of the speaker's voice in the shape and stroke of the written characters BID9. While perhaps fanciful at the time, modern deep learning techniques have shown that similar complex transformations are indeed possible. Deep learning enables domain transfer by learning a smooth mapping between two domains such that the variations in one domain are reflected in the other. This has been demonstrated to great effect within a data modality, for example transferring between two different styles of image BID12 BID18, video BID26 and music BID23. These works have been the basis of interesting creative tools, as small intuitive changes in the source domain are reflected by small intuitive changes in the target domain. Furthermore, the strong conditioning signal of the source domain makes learning transformations easier than learning a full generative model in each domain. Despite these successes, this line of work in domain transfer has several limitations. The first limitation is that it requires the two domains to be closely related (e.g. image-to-image or video-to-video). This allows the model to focus on transferring local properties like texture and coloring instead of high-level semantics. For example, directly applying image-to-image transfer methods such as CycleGAN or its variants to images from distant domains leads to distorted and unrealistic results. This agrees with the findings of BID3, who show that CycleGAN transformations are more akin to adversarial examples than style transfer, as the model learns to hide information about the source domain in near-imperceptible high-frequency variations of the target domain.
To achieve this, we train a model to transfer between the latent spaces of pre-trained generative models on source and target domains. (a) The training is done with three types of loss functions: The VAE ELBO losses to encourage modeling of z 1 and z 2, which are denoted as L2 and KL in the figure. The Sliced Wasserstein Distance loss to encourage cross-domain overlapping in the shared latent space, which is denoted as SWD. The classification loss to encourage intra-class overlap in the shared latent space, which is denoted as Classifier. The training is semi-supervised, since and requires no supervision (classes) while only needs such information. (b) To transfer data from one domain x 1 (an image of digit "0") to another domain x 2 (an audio of human saying "zero", shown in form of spectrum in the example), we first encode x 1 to z 1 ∼ q(z 1 |x 1), which we then further encode to a shared latent vector z using our conditional encoder, z ∼ q(z |z 1, D = 1), where D donates the operating domain. We then decode to the latent space of the target domain z 2 = g(z|z, D = 2) using our conditional decoder, which finally is used to generate the transferred audio x 2 = g(x 2 |z 2).learns to hide information about the source domain in near-imperceptible high-frequency variations of the target domain. The second limitation is data efficiency. Most conditional GAN techniques, such as Pix2Pix BID12 and vid2vid BID26, require very dense supervision from large volumes of paired data. This is usually accomplished by extracting features, such as edges or a segmentation map, and then training the conditional GAN to learn the inverse mapping back to pixels. For many more interesting transformations, no such easy alignment procedure exists, and paired data is scarce. We demonstrate the limitation of existing approaches in Appendix C.For multi-modal domain transfer, we seek to train a model capable of transferring instances from a source domain (x 1) to a target domain (x 2), such that local variations in source domain are transferred to local variations in the target domain. We refer to this property as locality. Thus, local interpolation in the source domain would ideally be similar to local interpolation in target domain when transferred. There are many possible ways that two domains could align such that they maintain locality, with many different alignments of semantic attributes. For instance, for a limited dataset, there is no a priori reason that images of the digit "0" and spoken utterances of the digit "0" would align with each other. Or more abstractly, there may be no agreed common semantics for images of landscapes and passages of music, and it is at the liberty of the user to define such connections based on their own intent. Our goal in modeling is to respect the user's intent and make sure that the correct semantics (e.g., labels) are shared between the two domains after transfer. We refer to this property as semantic alignment. A user can thus sort a set of data points from in each domain into common bins, which we can use to constrain the cross-domain alignment. We can quantitatively measure the degree of semantic alignment by using a classifier to label transformed data and measuring the percentage of data points that fall into the same bin for the source and target domain. Our goal can thus be stated as learning transformations that preserve locality and semantic alignment, while requiring as few labels from a user as possible. 
To achieve this goal and tackle the prior limitations, we propose to abstract the two data domains with independent latent variable models, and then learn to transfer between the latent spaces of those models. Our main contributions include: • We propose a shared "bridging" VAE to transfer between latent generative models. Locality and semantic alignment of transformations are encouraged by applying a sliced Wasserstein distance and a classification loss, respectively, to the shared latent space. • We demonstrate with qualitative and quantitative results that our proposed method enables transfer both within a modality (image-to-image) and between modalities (image-to-audio). • Since we train a smaller secondary model in latent space, we find improvements in training efficiency, both in terms of the amount of required labeled data and in terms of training time. Figure 1 diagrams our hierarchical approach. We first independently pre-train separate generative models, either a VAE or GAN, for both the source and target domain. For VAEs, data is encoded from the data domain to the latent space through a learned encoding function z ∼ q(z|x), and decoded back to the data space with a decoder function x ∼ g(x|z). For GANs, we choose latent samples from a spherical Gaussian prior z ∼ p(z) and then use rejection sampling to only select latent samples whose associated data x = g(z) is classified with high confidence by an auxiliary classifier. We then add the bridging conditional VAE with shared weights, tasked with modeling both latent spaces z1, z2. The VAE has a single shared latent space z′ that corresponds to both domains. Sharing the weights encourages the model to seek common structures between the latent domains, but we also find it helpful to condition both the encoder q_shared(z′|z, D) and decoder g_shared(z|z′, D) with an additional one-hot domain label, D, to allow the model some flexibility to adapt to variations particular to each domain. While the low-level VAEs have a spherical Gaussian prior, we penalize the KL-divergence to be less than 1, allowing the models to achieve better reconstructions and retain some structure of the original dataset for the bridging VAE to model. Full architectural and training details can be found in the Appendix. The conditional bridging VAE objective consists of three types of loss terms:

1. Evidence Lower Bound (ELBO). Standard value that is maximized for training a VAE, L_ELBO,d = E_{q_shared(z′|z_d, D=d)}[log π(z_d; g_shared(z_d|z′, D=d))] − β_KL KL(q_shared(z′|z_d, D=d) || p(z′)), where the likelihood π(z; g) is a spherical Gaussian N(z; g, σ²I), and σ and β_KL are hyperparameters set to 1 and 0.1 respectively to encourage reconstruction accuracy.

2. Sliced Wasserstein Distance (SWD) BID2. The distribution distance between mini-batches of samples from each domain in the shared latent space ({z′1}, {z′2}), L_SWD = (1/|Ω|) Σ_{ω∈Ω} W²₂(proj({z′1}, ω), proj({z′2}, ω)), where Ω is a set of random unit vectors, proj(A, a) is the projection of A on vector a, and W²₂(A, B) is the quadratic Wasserstein distance.

3. Classification Loss (Cls). For each domain d ∈ {1, 2}, we enforce semantic alignment with attribute labels y and a classification loss in the shared latent space: L_Cls,d = H(f(z′_d), y_d), where H is the cross-entropy loss and f(z′) is a one-layer linear classifier. (Illustrated in the accompanying figure: the shared latent space, where the blue line is the decision boundary of the classifier; the points from both domains are overlapping, class-separated, spread evenly, and maintain the continuity of the color gradient.)

Including terms for both domains, the total training loss is then L = −(L_ELBO,1 + L_ELBO,2) + β_SWD L_SWD + β_Cls (L_Cls,1 + L_Cls,2), where β_SWD and β_Cls are scalar loss weights.
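To illustrate how the three loss terms combine during training of the bridging VAE, here is a PyTorch-style sketch. It is our own approximation under the definitions above, not the authors' implementation: the ELBO terms are assumed to be computed elsewhere, the two minibatches are assumed to have the same size, and the logits come from the one-layer linear classifier on the shared codes.

```python
import torch
import torch.nn.functional as F

def sliced_wasserstein(z1, z2, num_projections=64):
    """Sliced quadratic Wasserstein distance between two equal-size minibatches of shared codes."""
    omega = torch.randn(num_projections, z1.shape[1], device=z1.device)
    omega = omega / omega.norm(dim=1, keepdim=True)   # random unit projection vectors
    p1 = torch.sort(z1 @ omega.t(), dim=0).values     # sorted 1-D projections per direction
    p2 = torch.sort(z2 @ omega.t(), dim=0).values
    return ((p1 - p2) ** 2).mean()                     # quadratic transport cost, averaged over slices

def bridging_vae_loss(elbo_1, elbo_2, z_shared_1, z_shared_2,
                      logits_1, y_1, logits_2, y_2, beta_swd=1.0, beta_cls=1.0):
    """Total loss: negative ELBOs + SWD + per-domain classification (weights are placeholders)."""
    return (-(elbo_1 + elbo_2)
            + beta_swd * sliced_wasserstein(z_shared_1, z_shared_2)
            + beta_cls * (F.cross_entropy(logits_1, y_1) + F.cross_entropy(logits_2, y_2)))
```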
The transfer procedure is illustrated FIG1 using synthetic data. For reconstructions, data x 1 is passed through two encoders, DISPLAYFORM1 For transformations, the encoding is the same, but decoding uses decoders (and conditioning) from the second domain,ẑ 2 ∼ g shared (ẑ 2 |z, D = 2),x 2 ∼ g(x 2 |ẑ 2). Further analysis of this example and intuition behind loss terms is summarized in FIG1 and detailed in Appendix A. We discuss two aspects of existing research focuses that are related to our work, followed by how our work differentiates itself from them in order to deal with the challenge identified in this paper. Deep latent generative models are usually constructed to transfer a simple, tractable distribution p(z) into the approximation of population distribution p * (x), through an expressive neural network function. Such models include VAE BID15 and GAN BID10. GANs are trained with an accompany classifier that attempts to distinguish between samples from the decoder and the true dataset. VAEs, in contrast, are trained with an encoder distribution q(z|x) as an approximation to the posterior p(z|x) using variational approximation through the use of evidence lower bound (ELBO). These classes of models have been thoroughly investigated in many applications and variants BID11 BID17 BID1 including conditional generation BID22, generation of one domain conditioned on another BID4 BID24, generation of high-quality images BID13 and long-range structure in music. In terms of overall VAE structure, BID29 studies options to build hierarchical VAEs. The domain transfer enables transfer between images BID12 BID18, audio BID23 and video BID26, intuitively mapping between two domains where the variations in one domain should be reflected in the other. Besides visually appealing , domain transfer also enables application such as image colorization BID28. Domain transfer is also proposed to be done through jointly training of generative models BID21. Also, the behavior of domain transfer models also attracts attention. For example, BID3 suggests that image transfer does only local, texture level transfer. To enable transfer between possibly drastically different domains, Our work proposes to use VAE in modeling the latent space of pre-trained generative models, in several aspects differentiating itself from related work. Generally, the modeling of latent space is different from modeling directly the data domains as most of latent generative models naturally do, also, the transfer in a more abstract semantics and between heterogeneous domains differs from most domain transfer method which focusing on locality and similar domains. More specifically, regarding modeling latent spaces, BID29 suggests training a hierarchical VAE on a single domain should be done end-to-end, whose extension to multiple domains seems non-trivial and likely to suffer from data efficient issues. Instead, our proposed work, though enabling separation of pre-trained model and conditional shared VAE, apply to domain transfer setting while overcoming this shortcoming. Moreover, regarding shared latent space, BID20 proposes to use shared latent space for two generative models on data domains. It requires joint training of generative models on both domains using dense supervision which is infeasible for drastically different domains. Our work that leverages pre-trained generative model and model the latent space instead of data domains addresses to this limitation. 
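Before turning to the experiments, the full transfer chain described above (x1 → z1 → z′ → z2 → x2) can be summarized in a short routine. This is a sketch under assumed interfaces: the encoder/decoder modules and their sample() and domain arguments are placeholders, not the authors' API.

```python
import torch

@torch.no_grad()
def transfer(x1, enc_1, shared_enc, shared_dec, dec_2):
    """Transfer a domain-1 datapoint to domain 2 through the shared latent space."""
    z1 = enc_1(x1).sample()                        # z1 ~ q(z1 | x1), pre-trained domain-1 VAE encoder
    z_shared = shared_enc(z1, domain=1).sample()   # z' ~ q_shared(z' | z1, D=1), conditional bridging encoder
    z2 = shared_dec(z_shared, domain=2).sample()   # z2 ~ g_shared(z2 | z', D=2), conditional bridging decoder
    return dec_2(z2)                               # x2 ~ g(x2 | z2), pre-trained domain-2 generator
```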
While the end goal of our method is to enable creative mapping between datasets with arbitrary alignments, for quantitative studies we restrict ourselves to three domains where there exist a somewhat natural alignment to compare against:1. MNIST , which contains images of hand-written digits of 10 classes from "0" to "9". 2. Fashion MNIST BID27, which contains fashion related objects such as shoes, tshirts, categorized into 10 classes. The structure of data and the size of images are identical to MNIST. 3. SC09, a subset of Speech Commands Dataset 1, which contains the record of audio of humans saying digits from "0" to "'9". For MNIST and Fashion MNIST, we prepare VAE with MLP encoder and decoder following setting up in. More specifically, we use stacks of fully-connected linear layers activated by ReLU, together with a "Gated Mixing Layer". The full network architecture of the bridging VAE is detailed in Appendix B. For SC09 we use the publicly available WaveGAN BID6 2. We would like to emphasize that we only use class level supervision for enforcing semantic alignment with the latent classifier. We examine three scenarios of domain transfer:1. MNIST ↔ MNIST. We first train two lower-level VAEs from different initial conditions. The bridging autoencoder is then tasked with transferring between latent spaces while maintaining the digit class from source to target. 2. MNIST ↔ Fashion MNIST. In this scenario, we specify a global one-to-one mapping between 10 digit classes and 10 fashion object classes (See TAB3 in Appendix for details).Under review as a conference paper at ICLR 2019The bridging autoencoder is tasked with preserving this mapping as it transfers between images of digits and clothing.3. MNIST ↔ SC09. For the speech dataset, we first train a GAN to generate audio waveforms BID6 of spoken digits. We chose to use a WaveGAN because we wanted a global latent variable for the full waveform (as opposed to a distributed latent code as in BID7). It also gives us an opportunity to explore transferring between different classes of models. The bridging autoencoder is then tasked with transferring between a VAE of written digits and a GAN of spoken digits. For reconstructions and domain transfer, we present both qualitative and quantitative . Quantitative measurements of semantic alignment are performed with pre-trained classifiers in each data domain. Given that the datasets have pre-aligned classes, when evaluating transferring from data x d1 to x d2, the reported accuracy is the portion of instances that x d1 and x d2 have the same predicted class. Qualitative reconstruction are shown in FIG2 and the quantitative reconstruction accuracies are given in TAB0. For domain transfer, qualitative are shown in FIG4 and the quantitative transfer accuracies are given in Table 2. Within each group, on the left is the data in the source domain and on the right is the data in the target domain. We see that transfer maintains the label, yet still maintains diversity of samples, reflecting the transfer of a broad range of attributes. Table 2: Domain Transfer Accuracy for MNIST ↔ MNIST, MNIST ↔ Fashion MNIST and MNIST ↔ SC09 transfer respectively. We compare to pre-existing approaches trained on raw-pixels for MNIST ↔ Fashion MNIST only, MNIST → MNIST involves transferring between pretrained models with different initial conditions which is not directly comparable, and in MNIST → SC09, the two data domains were too distinct to provide any reasonable transfer with existing methods. 
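The transfer-accuracy protocol described above can be sketched as follows; the classifier and data-loader interfaces, and the optional class mapping for MNIST ↔ Fashion MNIST, are assumptions about how the evaluation is wired together.

```python
import torch

@torch.no_grad()
def transfer_accuracy(loader_src, transfer_fn, clf_src, clf_tgt, class_map=None):
    """Fraction of samples whose predicted class in the source domain matches
    the predicted class of the transferred sample in the target domain."""
    correct, total = 0, 0
    for x_src, _ in loader_src:
        x_tgt = transfer_fn(x_src)
        y_src = clf_src(x_src).argmax(dim=1)
        y_tgt = clf_tgt(x_tgt).argmax(dim=1)
        if class_map is not None:   # e.g. digit class -> fashion class
            y_src = torch.tensor([class_map[int(c)] for c in y_src],
                                 device=y_tgt.device)
        correct += (y_src == y_tgt).sum().item()
        total += y_src.numel()
    return correct / total
```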
Further comparisons can be found in Appendix C. DISPLAYFORM0 Interpolation can act as a good proxy for locality and local smoothness of the latent transformations, as by definition good interpolations require that small changes in the source domain are reflected by small changes in the target domain. We show inter-class and inter-class interpolation in FIG5 and FIG6 respectively. Particularly, we are interested in two comparing three rows of interpolations: the interpolation in the source domain's latent space, which acts a baseline for smoothness of interpolation for a pre-trained generative model, transfer fixed points to the target domain's latent space and interpolate in that space, and transfer all points of the source interpolation to the target domain's latent space, which shows how the transferring warps the latent space. We use spherical interpolation (e.g., √ pv 1 + (1 − p)v 2 )since we are interpolating in the Gaussian latent space. Note in FIG5 that the second and third rows have comparably smooth trajectories, reflecting that locality has been preserved. For inter-class interpolation in FIG5 interpolation is smooth within a class, but between classes the second row blurs pixels to create blurry combinations of digits, while the full transformation in the third row makes sudden transitions between classes. This is expected from our training procedure as the bridging autoencoder is modeling the marginal posterior of each latent space, and thus always stays on the manifold of the actual data during interpolation. Since our method is a semi-supervised method, we want to know how effectively our method leverages the labeled data. In Table 3 we show for the MNIST → MNIST setting the performance measured by transfer accuracy with respect to the number of labeled data points. Labels are distributed 3 In FIG2 and 4 we show the spectrum of audio samples for demonstration purpose. The corresponding audio samples themselves are available here: https://drive.google.com/drive/u/8/folders/ 12u6fKvg0St6gjQ_c2bThX9B2KRJb7Cvk FIG5, except that interpolation now happens between classes. It can be shown that, unlike regular generative model (row 1 and row 2 in each group) that exhibits pixel (data) level interpolation, especially the blurriness and distortion half way between instances of different labels, our proposed transfer (row 3) resorts to produce high-quality, in-domain data. This is an expected behavior since our proposed method learns to model the marginalized posterior of data distribution.evenly among classes. The accuracy of transformations grows monotonically with the number of labels and reaches over 50% with as few as 10 labels per a class. Without labels, we also observe accuracies greater than chance due to unsupervised alignment introduced by the SWD penalty in the shared latent space. Table 3: MNIST → MNIST transfer accuracy as a function of labeled data points. The supervised data points are split evenly among all 10 classes. Besides data efficiency, pre-training the base generative models has computational advantages. For large generative models that take weeks to train, it would be infeasible to retrain the entire model for each new cross-domain mapping. The bridging autoencoder avoids this cost by only retraining the latent transfer mappings. As an example from these experiments, training the bridging autoencoder for MNIST ↔ SC09 takes about one hour on a single GPU, while retraining the SC09 WaveGAN takes around four days. 
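For reference, the interpolation used for the trajectories discussed above can be written as below. The printed formula appears to have lost a square root on the second term, so a variance-preserving form sqrt(p)·v1 + sqrt(1 - p)·v2, common for Gaussian latent spaces, is assumed here.

```python
import torch

def latent_interpolate(v1, v2, p):
    """Variance-preserving interpolation between two Gaussian latent codes
    (assumed form: sqrt(p) * v1 + sqrt(1 - p) * v2)."""
    p = torch.as_tensor(p, dtype=v1.dtype)
    return torch.sqrt(p) * v1 + torch.sqrt(1.0 - p) * v2

# Example: an 11-point trajectory from v2 (p = 0) to v1 (p = 1).
v1, v2 = torch.randn(64), torch.randn(64)
trajectory = [latent_interpolate(v1, v2, p) for p in torch.linspace(0.0, 1.0, 11)]
```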
Finally, we perform an ablation study to confirm the benefits of each architecture component to transfer accuracy. For consistency, we stick to the MNIST → MNIST setting with fully labeled data. In Table 4, we see that the largest contribution to performance is the giving the bridging VAE a domain conditioning signal, allowing it to share weights between domains, but also adapt to the specific structure of each domain. Further, the increased overlap in the shared latent space induced by the SWD penalty is reflected in the greater transfer accuracies. Data Domain Vanilla, Unconditional VAE Conditional VAE Conditional VAE + SWD Accuracy 0.149 0.849 0.980 Table 4: Ablation study of MNIST → MNIST transfer accuracies. We have demonstrated an approach to learn mappings between disparate domains by bridging the latent codes of each domain with a shared autoencoder. We find bridging VAEs are able to achieve high transfer accuracies, smoothly map interpolations between domains, and even connect different model types (VAEs and GANs). Here, we have restricted ourselves to datasets with intuitive classlevel mappings for the purpose of quantitative comparisons, however, there are many interesting creative possibilities to apply these techniques between domains without a clear semantic alignment. As a semi-supervised technique, we have shown bridging autoencoders to require less supervised labels, making it more feasible to learn personalized cross-modal domain transfer based on the creative guidance of individual users. We want to archive following three goals for the proposed VAE for latent spaces:1. It should be able to model the latent space of both domains, including modeling local changes as well.2. It should encode two latent spaces in a way to enable domain transferability. This means encoded z 1 and z 2 in the shared latent space should occupy overlapped spaces.3. The transfer should be kept in the same class. That means, regardless of domain, zs for the same class should occupy overlapped spaces. With these goals in mind, we propose to use an optimization target composing of three kinds of losses. In the following text for notational convenience, we denote approximated posterior DISPLAYFORM0, 2}, the process of sampling z d from domain d.1. Modeling two latent spaces with local changes. VAEs are often used to model data with local changes in mind, usually demonstrated with smooth interpolation, and we believe this property also applies when modeling the latent space of data. Consider for each domain d ∈ {1, 2}, the VAE is fit to data to maximize the ELBO (Evidence Lower Bound) DISPLAYFORM1 where both q and g are fit to maximize L ELBO d. Notably, the latent space zs are continuous, so we choose the likelihood π(z; g) to be the product of N (z; g, σ 2 I), where we set σ to be a constant that effectively sets log π(z; g) = ||z − g|| 2, which is the L2 loss in FIG0. Also, D KL is denoted as KL loss in FIG0.2. Cross-domain overlapping in shared latent space. Formally, we propose to measure the cross-domain overlapping through the distance between following two distributions as a proxy: the distribution of z from source domain (e.g., z 1 ∼ Z 1) and that from the target domain (e.g., z 2 ∼ Z 1). We use Wasserstein Distance to measure the distance of two sets of samples (this notion straightforwardly applies to the mini-batch setting) S 1 and S 2, where S 1 is sampled from the source domain z 1 ∼ Z 1 and S 1 from the target domain z 2 ∼ Z d. 
For computational efficiency and inspired by BID5, we use SWD, or Sliced Wasserstein Distance BID2 between S 1 and S 2 as a loss term to encourage cross-domain overlapping in shared latent space. This means in practice we introduce the loss term DISPLAYFORM2 where Ω is a set of random unit vectors, proj(A, a) is the projection of A on vector a, and W 2 2 (A, B) is the quadratic Wasserstein distance, which in the one-dimensional case can be easily solved by monotonically pairing points in A and B, as proven in BID5.3. Intra-class overlapping in shared latent space. We want that regardless of domain, zs for the same class should occupy overlapped spaces, so that instance of a particular class should retain its label through the transferring. We therefore introduce the following loss term for both domain DISPLAYFORM3 where H is the cross entropy loss, f (z) is a one-layer linear classifier, and l x is the one-hot representation of label of x where x is the data associated with z. We intentionally make classifier f as simple as possible in order to encourage more capacity in the VAE instead of the classifier. Notably, unlike previous two categories of losses that are unsupervised, this loss requires labels and is thus supervised. In Figure 7 we show the intuition to design and the contribution to performance from each loss terms. DISPLAYFORM4 Figure 7: Synthetic data to demonstrate the transfer between 2-D latent spaces with 2-D shared latent space. Better viewed with color and magnifier. Columns (a) -(e) are synthetic data in latent space, reconstructed latent space points using VAE, domain 1 transferred to domain 2, domain 2 transferred to domain 1, shared latent space, respectively, follow the same arrangement as FIG1. Each row represent a combination of our proposed components as follows: Regular, unconditional VAE.Here transfer fails and the shared latent space are divided into region for two domains. Conditional VAE. Here exists an overlapped shared latent space. However the shared latent space are not mixed well. Conditional VAE + SWD. Here the shared latent space are well mixed, preserving the local changes across domain transfer. Conditional + SWD + Classification. This is the best scenario that enables both domain transfer and class preservation as well as local changes. It is also highlighted in FIG1. An overall observation is that each proposed component contributes positively to the performance in this synthetic data, which serves as a motivation for our decision to include all of them. The model architecture of our proposed VAE is illustrated in Figure B. The model relies on Gated Mixing Layers, or GML. We find empirically that GML improves performance by a large margin than linear layers, for which we hypothesize that this is because both the latent space (z 1, z 2) and the shared latent space z are Gaussian space, GML helps optimization by starting with a good initialization. We also explore other popular network components such as residual network and batch normalization, but find that they are not providing performance improvements. Also, the condition is fed to encoder and decoder as a 2-length one hot vector indicating one of two domains. 
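A minimal sketch of the intra-class alignment term described above: the classifier f is intentionally a single linear layer so that capacity stays in the VAE, and it is trained with cross-entropy against the label of the data point that produced each shared-latent code. Sizes below are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentClassifier(nn.Module):
    """One-layer linear classifier f(z) on the shared latent space."""

    def __init__(self, latent_dim=8, n_classes=10):
        super().__init__()
        self.linear = nn.Linear(latent_dim, n_classes)

    def forward(self, z):
        return self.linear(z)

def intra_class_loss(classifier, z_shared, labels):
    """Cross-entropy H(f(z), l_x) applied to shared-latent codes from either domain."""
    return F.cross_entropy(classifier(z_shared), labels)
```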
For all settings, we use the dimension of shared latent space 100, β SWD = 1.0 and β CLs = 0.05, Specifically, for MNIST ↔ MNIST and MNIST ↔ Fashion MNIST, we use the dimension of shared latent space 8, 4 layers of FC (fully connected layers) of size 512 with ReLU, β KL = 0.05, β SWD = 1.0 and β CLs = 0.05; while for MNIST ↔ SC09, we use the dimension of shared latent space 16, 8 layers of FC (fully connected layers) of size 1024 with ReLU β KL = 0.01, β SWD = 3.0 and β CLs = 0.3. The difference is due to that GAN does not provide posterior, so the latent space points estimated by the classifier is much harder to model. For optimization, we use Adam optimizer BID14 with learning rate 0.001, beta 1 = 0.9 and beta 2 = 0.999. We train 50000 batches with batch size 128. We do not employ any other tricks for VAE training. We compare our with two existing approaches, Pix2Pix BID12 on the left and CycleGAN on the right, on the same MNIST ↔ Fashion MNIST transfer settings used in Figured 4. We show qualitative from applying Pix2Pix and CycleGAN in Figure 9, which can be compared with Figured 4, as well as quantitative in Table 5. Both qualitative and quantitative shows the limitation of existing methods and our proposed approach's advantage over them. Figure 9: Qualitative from applying Pix2Pix BID12 on the left and Cycle-GAN on the right, on the same settings used in Figured 4. Visually, both existing transfer approaches suffer from less desirable overall visual quality and less diversity in local changes, compared to our proposed approach. Particularly, Pix2Pix more or less makes semantic labels correct but suffers from mode collapses in each label, while CycleGAN has slightly better quality but suffers from label collapse, which is observable here that most of digits are transferred to Dress and leads to bad transfer accuracy. Transfer Accuarcy FID (Fréchet Inception Distance)Pix2Pix BID12 0.773 0.0786 CycleGAN 0.075 0.3333 This work 0.945 0.0055 Table 5: Quantitative of methods using Transfer Accuracy and Fréchet Inception Distance (FID). Transfer Accuracy is calculated using the same protocol as Table 2 where a higher value indicates the better performance, while FID is computed using the activation of the classifier on the target domain (Fashion MNIST) where a lower value indicates a better image quality. Quantitatively both existing methods perform worse than our proposed method. Here Pix2Pix more or less makes semantic labels correct but still suffers from lower accuracy and image quality. while CycleGAN suffers from label collapse, which leads to an even lower transfer accuracy and FID.
r1xrb3CqtQ
Conditional VAE on top of latent spaces of pre-trained generative models that enables transfer between drastically different domains while preserving locality and semantic alignment.
We propose Adversarial Inductive Transfer Learning (AITL), a method for addressing discrepancies in input and output spaces between source and target domains. AITL utilizes adversarial domain adaptation and multi-task learning to address these discrepancies. Our motivating application is pharmacogenomics where the goal is to predict drug response in patients using their genomic information. The challenge is that clinical data (i.e. patients) with drug response outcome is very limited, creating a need for transfer learning to bridge the gap between large pre-clinical pharmacogenomics datasets (e.g. cancer cell lines) and clinical datasets. Discrepancies exist between 1) the genomic data of pre-clinical and clinical datasets (the input space), and 2) the different measures of the drug response (the output space). To the best of our knowledge, AITL is the first adversarial inductive transfer learning method to address both input and output discrepancies. Experimental indicate that AITL outperforms state-of-the-art pharmacogenomics and transfer learning baselines and may guide precision oncology more accurately. Deep neural networks have demonstrated the state-of-the-art performance in different problems, ranging from computer vision and natural language processing to genomics and medicine . However, these networks often require a large number of samples for training, which is challenging and sometimes impossible to obtain in the real world applications. Transfer learning attempts to solve this challenge by leveraging the knowledge in a source domain, a large data-rich dataset, to improve the generalization performance on a small target domain. Training a model on the source domain and testing it on the target domain violates the i.i.d assumption that the train and test data are from the same distribution. The discrepancy in the input space decreases the prediction accuracy on the test data, which leads to poor generalization . Many methods have been proposed to minimize the discrepancy between the source and the target domains using different metrics such as Jensen Shannon Divergence , Maximum Mean Discrepancy , and Margin Disparity Discrepancy . While transductive transfer learning (e.g. domain adaptation) uses a labeled source domain to improve generalization on an unlabeled target domain, inductive transfer learning (e.g. few-shot learning) uses a labeled source domain to improve the generalization on a labeled target domain where label spaces are different in the source and the target domains . Adversarial domain adaptation has shown great performance in addressing the discrepancy in the input space for different applications (; ; ; ; ; ;, however, adversarial adaptation to address the discrepancies in both the input and output spaces has not yet been explored. Our motivating application is pharmacogenomics where the goal is to predict response to a cancer drug given the genomic data (e.g. gene expression). Since clinical datasets in pharmacogenomics (patients) are small and hard to obtain, many studies have focused on large pre-clinical pharmacogenomics datasets such as cancer cell lines as a proxy to patients . A majority of the current methods are trained on cell line datasets and then tested on other cell line or patient datasets ). However, cell lines and patients data, even with the same set of genes, do not have identical distributions due to the lack of an immune system and the tumor microenvironment in cell lines . 
Moreover, in cell lines, the response is often measured by the drug concentration that reduces viability by 50% (IC50), whereas in patients, it is often based on changes in the size of the tumor and measured by metrics such as response evaluation criteria in solid tumors (RECIST) . This means that drug response prediction is a regression problem in cell lines but a classification problem in patients. Therefore, discrepancies exist in both the input and output spaces in pharmacogenomics datasets. Table A1 provides the definition of these biological terms. In this paper, we propose Adversarial Inductive Transfer Learning (AITL), the first adversarial method of inductive transfer learning. Different from existing methods for transfer learning, AITL adapts not only the input space but also the output space. Our motivating application is transfer learning for pharmacogenomics datasets. In our driving application, the source domain is the gene expression data obtained from the cell lines and the target domain is the gene expression data obtained from patients. Both domains have the same set of genes (i.e., raw feature representation). Discrepancies exist between the gene expression data in the input space, and the measure of the drug response in the output space. AITL learns features for the source and target samples and uses these features as input for a multi-task subnetwork to predict drug response for both the source and the target samples. The output space discrepancy is addressed by the multi-task subnetwork, which has one shared layer and separate classification and regression towers, and assigns binary labels (called cross-domain labels) to the source samples. The multi-task subnetwork also alleviates the problem of small sample size in the target domain by sharing the first layer with the source domain. To address the discrepancy in the input space, AITL performs adversarial domain adaptation. The goal is that features learned for the source samples should be domain-invariant and similar enough to the features learned for the target samples to fool a global discriminator that receives samples from both domains. Moreover, with the cross-domain binary labels available for the source samples, AITL further regularizes the learned features by class-wise discriminators. A class-wise discriminator receives source and target samples from the same class label and should not be able to predict the domain accurately. We evaluated the performance of AITL and state-of-the-art inductive and adversarial transductive transfer learning baselines on pharmacogenimcs datasets in terms of the Area Under the Receiver Operating Characteristic curve (AUROC) and the Area Under the Precision-Recall curve (AUPR). In our experiments, AITL achieved a substantial improvement compared to the baselines, demonstrating the potential of transfer learning for drug response prediction, a crucial task of precision oncology. Following the notation of , a domain like DM is defined by a raw input feature space 1 X and a probability distribution p(X), where X = {x 1, x 2, ..., x n} and x i is the i-th raw feature vector of X. A task T is associated with DM = {X, p(X)}, where T = {Y, F } is defined by a label space Y and a predictive function F which is learned from training data of the form (X, Y), where X ∈ X and Y ∈ Y. 
A source domain is defined as DM S = {(x s1, y s1), (x s2, y s2),..., (x sn S, y sn S)} and a target domain is defined as DM T = {(x t1, y t1), (x t2, y t2),..., (x tn T, y tn T)}, where x s ∈ X S, x t ∈ X T, y s ∈ Y S, and y t ∈ Y T. Since n T << n S and it is challenging to train a model only on the target domain, transfer learning aims to improve the generalization on a target task T T using the knowledge in DM S and DM T and their corresponding tasks T S and T T. Transfer learning can be categorized into three categories: 1) unsupervised transfer learning, 2) transductive transfer learning, and 3) inductive transfer learning. In unsupervised transfer learning, there is no label in the source and target domains. In transductive transfer learning, source domain is labeled but target domain is unlabeled, domains can be either the same or different (domain adaptation), but source and target tasks are the same. In inductive transfer learning, target domain is labeled and source domain can be either labeled or unlabeled, and domains can be the same or different, but in this category tasks are always different . There are three approaches to inductive transfer learning: 1) deep metric learning, 2) few-shot learning, and 3) weight transfer . Deep metric learning methods are independent of the number of samples in each class of the target domain, denoted by k, meaning that they work for small and large k values. Few-shot learning methods focus on small k (k ≤ 20). Finally, weight transfer methods require a large k (k ≥ 100 or k ≥ 1000) . Figure A1 (in Appendix) presents this taxonomy. In drug response prediction, the target domain is small, which means a limited number of samples for each class is available, therefore, few-shot learning is more suitable for such a problem. Fewshot learning involves training a classifier to recognize new classes, provided only a small number of examples from each of these new classes in the training data. Various methods have been proposed for few-shot learning (; ;). For example, Prototypical Networks (ProtoNet) constructs prototypical representatives (class means) from source domain learned features and compares the Euclidean distance between the target domain learned features and these class representatives to assign labels to the target samples. Recent advances in adversarial learning leverage deep neural networks to learn transferable representation that disentangles domain-invariant and class-invariant features from different domains and matches them properly (; ;). In this section, we first introduce the Generative Adversarial Networks (GANs) , and then introduce some of the existing works on adversarial transfer learning. GANs attempt to learn the distribution of the input data via a minimax framework where two networks are competing: a discriminator D and a generator G. The generator tries to create fake samples from a randomly sampled latent variable that fool the discriminator, while the discriminator tries to catch these fake samples and discriminate them from the real ones. Therefore, the generator wants to minimize its error, while the discriminator wants to maximize its accuracy: A majority of literature on adversarial transfer learning are for transductive transfer learning where the source domain is labeled while the target domain is unlabeled. Transductive transfer learning, often referred to as domain adaptation, is the most common scenario in transfer learning. 
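For concreteness, the ProtoNet decision rule summarized above can be sketched as follows (a simplified, non-episodic version; variable names are ours).

```python
import torch

def prototypical_predict(z_support, y_support, z_query, n_classes):
    """Class prototypes are the means of labeled support embeddings; each query
    embedding is assigned to the nearest prototype in Euclidean distance."""
    prototypes = torch.stack(
        [z_support[y_support == c].mean(dim=0) for c in range(n_classes)]
    )                                             # (n_classes, dim)
    distances = torch.cdist(z_query, prototypes)  # (n_query, n_classes)
    return distances.argmin(dim=1)
```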
Various methods have been proposed for adversarial transductive transfer learning in different applications such as image segmentation ), image classification ), speech recognition , domain adaptation under label-shift , partial domain adaptation , and multiple domain adaptation . The idea of these methods is that features extracted from source and target samples should be similar enough to fool a global discriminator and/or class-wise discriminators. The goal of precision oncology is to tailor a treatment for a cancer patient using genomic information of that patient. However, currently, only about 5% of the patients can benefit from precision oncology because response to a drug is a highly complex phenotype and it depends on diverse genetic and/or non-genetic factors . Pre-clinical pharmacogenomics datasets such as cancer cell lines and patientderived xenografts (PDX) are reliable proxies to study the associations between the genomic landscape and the response to a cancer treatment. The advantage of these resources is that they can be screened with hundreds of drugs -chemotherapy agents and targeted therapeutics -which is impossible for patients. For example, in the Genomics of Drug Sensitivity in Cancer (GDSC) dataset over 1000 pan-cancer cell lines screened with 265 chemotherapy and targeted drugs are available. Another advantage of the pre-clinical datasets is that they are often significantly larger than patient datasets with known drug response (labels). These advantages of pre-clinical datasets make them a suitable resource to develop computational methods for drug response prediction . Various methods have been developed to predict drug response from single or multiple types of genomic data. For example, proposed a ridge-regression method to predict drug response based on gene expression data. showed that integrating multiple data types with deep neural networks and transfer learning via sample transfer improves the accuracy of drug response prediction. Given a labeled source domain DM S with a learning task T S and a labeled target domain DM T with a learning task T T, where T T = T S, and p(X T) = p(X S), where X S, X T ∈ X, we assume that the source and the target domains are not the same due to different probability distributions. The goal of Adversarial Inductive Transfer Learning (AITL) is to utilize the source and target domains and their tasks in order to improve the learning of In the area of pharmacogenomics, the source domain is the gene expression data obtained from the cell lines, and the source task is to predict the drug response in the form of IC50 values. The target domain consists of gene expression data obtained from patients, and the target task is to predict drug response in a different form -often change in the size of tumor after receiving the drug. In this setting, p(X T) = p(X S) because cell lines are different from patients even with the same set of genes. Additionally, Y T = Y S because for the target task Y T ∈ {0, 1}, drug response in patients is a binary outcome, but for the source task Y S ∈ R +, drug response in cell lines is a continuous outcome. As a , AITL needs to address these discrepancies in the input and output spaces. 
Our proposed AITL method takes input data from the source and target domains, and achieves the following three objectives: first, it makes predictions for the target domain using both of the input domains and their corresponding tasks, second, it addresses the discrepancy in the output space between the source and target tasks, and third, it addresses the discrepancy in the input space. AITL is a neural network consisting of four components: • The feature extractor receives the input data from the source and target domains and extracts salient features, which are then sent to the multi-task subnetwork component. • The multi-task subnetwork takes the extracted features of source and target samples and maps them to their corresponding labels and makes predictions for them. This component has a shared layer and two task-specific towers for regression (source task) and classification (target task). Therefore, by training the multi-task subnetwork on the source and target samples, it addresses the small sample size challenge in the target domain. In addition, it also addresses the discrepancy in the output space by assigning cross-domain labels (binary labels in this case) to the source samples (for which only continuous labels are available) using its classification tower. • The global discriminator receives extracted features of source and target samples and predicts if an input sample is from the source or the target domain. To address the discrepancy in the input space, these features should be domain-invariant so that the global discriminator cannot predict their domain labels accurately. This goal is achieved by adversarial learning. • The class-wise discriminators further reduce the discrepancy in the input space by adversarial learning at the level of the different classes, i.e., extracted features of source and target samples from the same class go to the discriminator for that class and this discriminator should not be able to predict if an input sample from a given class is from the source or the target domain. The AITL cost function consists of a classification loss, a regression loss, and global and class-wise discriminator adversarial losses and is optimized end-to-end. An overview of the proposed method is presented in figure 1. Figure 1: Overview of AITL: First, the feature extractor receives source and target samples and learns feature for them. Then, the multi-task subnetwork uses these features to make predictions for the source and target samples and also assigns cross-domain labels to the source samples. The multi-task subnetwork addresses the discrepancy in the output space. Finally, to address the input space discrepancy, global and class-wise discriminators receive the extracted features and regularize the feature extractor to learn domain-invariant features. To learn salient features in lower dimensions for the input data, we design a feature extractor component. The feature extractor is a one-layer fully-connected subnetwork with batch normalization and the ReLU activation function that receives both the source and target samples as input. We denote the feature extractor as f : where Z denotes the extracted features for input X which is from either the source (S) or the target (T) domain. In our driving application, the feature extractor learns features for the cell line and patient data. 
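The four components can be sketched as a single PyTorch module as follows. This is an illustrative skeleton: layer widths are placeholders, and the tower depths and activation placement only approximate the description above.

```python
import torch
import torch.nn as nn

class AITL(nn.Module):
    """Sketch of the AITL components: feature extractor, multi-task subnetwork
    (regression + classification towers), and global / class-wise discriminators."""

    def __init__(self, n_genes, feat=128, n_classes=2):
        super().__init__()
        # Shared feature extractor for cell-line (source) and patient (target) data.
        self.extractor = nn.Sequential(
            nn.Linear(n_genes, feat), nn.BatchNorm1d(feat), nn.ReLU())
        # Multi-task subnetwork: one shared layer plus two task-specific towers.
        self.shared = nn.Sequential(
            nn.Linear(feat, feat), nn.BatchNorm1d(feat), nn.ReLU())
        self.regressor = nn.Sequential(               # source task: IC50 regression
            nn.Linear(feat, feat // 2), nn.ReLU(), nn.Linear(feat // 2, 1))
        self.classifier = nn.Sequential(              # target task: response classification
            nn.Linear(feat, feat // 2), nn.ReLU(), nn.Linear(feat // 2, 1), nn.Sigmoid())
        # Global discriminator and one discriminator per response class.
        self.disc_global = nn.Sequential(nn.Linear(feat, 1), nn.Sigmoid())
        self.disc_class = nn.ModuleList(
            nn.Sequential(nn.Linear(feat, 1), nn.Sigmoid()) for _ in range(n_classes))

    def forward(self, x):
        z = self.extractor(x)
        h = self.shared(z)
        return z, self.regressor(h), self.classifier(h)
```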
After extracting features of the input samples, we want to use these learned features to 1) make predictions for target samples, and 2) address the discrepancy between the source and the target domains in the output space. To achieve these goals, a multi-task subnetwork with a shared layer g and two task-specific towers M S and M T is designed, where M S is for regression (the source task) and M T is for classification (the target task): The performance of the multi-task subnetwork component is evaluated based on a binary-cross entropy loss for the classification task on the target samples and a mean squared loss for the regression task on the source samples: Where Y S and Y T are the true labels of the source and the target samples, respectively, and L BCE and L M SE are the corresponding losses for the target and the source domains, respectively. The multi-task subnetwork component outputs 1) the predicted labels for the target samples, and 2) the assigned cross-domain labels for the source samples. The classification tower in the multitask subnetwork makes predictions for the source samples and assigns binary labels (responder or non-responder) because such labels do not exist for the source samples. Therefore, the multi-task subnetwork adapts the output space of the source and the target domains by assigning cross-domain labels to the source domain. The multi-task subnetwork has a shared fully-connected layer with batch normalization and the ReLU activation function. The regression tower has two layers with batch normalization and the ReLU activation function. The classification tower also has two fully connected layer with batch normalization and the ReLU activation function in the first layer and the Sigmoid activation function in the second layer. In our driving application the multi-task subnetwork predicts IC50 values for the cell lines and the binary response outcome for the patients. Moreover, it also assigns binary labels to the cell lines which is similar to those of the patients. The goal of this component is to address the discrepancy in the input space by adversarial learning of domain-invariant features. To achieve this goal, a discriminator receives source and target extracted features from the feature extractor and classifies them into their corresponding domain. The feature extractor should learn domain-invariant features to fool the global discriminator. In our driving application the global discriminator should not be able to recognize if the extracted features of a sample are from a cell line or a patient. This discriminator is a one-layer subnetwork with the Sigmoid activation function denoted by D G . The adversarial loss for D G is as follows: With cross-domain binary labels available for the source domain, AITL further reduces the discrepancy between the input domains via class-wise discriminators. The goal is to learn domain-invariant features with respect to specific class labels such that they fool corresponding class-wise discriminators. Therefore, extracted features of the target samples in class i, and those of the source domain which the multi-task subnetwork assigned to class i, will go to the discriminator for class i. We denote such a class-wise discriminator as DC i. The adversarial loss for DC i is as follows: In our driving application the class-wise discriminator for the responder samples should not be able to recognize if the extracted features of a responder sample are from a cell line or a patient (similarly for a non-responder sample). 
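Reusing the skeleton sketched above, the individual loss terms can be computed roughly as follows. The min-max handling of the adversarial terms (done with a gradient reversal layer in this work) is omitted for brevity, and y_tgt is assumed to be a 0/1 label tensor.

```python
import torch
import torch.nn.functional as F

def aitl_loss_terms(model, x_src, ic50_src, x_tgt, y_tgt):
    """Sketch of the AITL loss terms; `model` is the AITL skeleton defined earlier."""
    z_src, reg_src, cls_src = model(x_src)
    z_tgt, _, cls_tgt = model(x_tgt)

    # Task losses: MSE on cell-line IC50 values, BCE on patient response labels.
    l_mse = F.mse_loss(reg_src.squeeze(-1), ic50_src)
    l_bce = F.binary_cross_entropy(cls_tgt.squeeze(-1), y_tgt.float())

    # Global discriminator loss: source (label 0) vs. target (label 1) features.
    d_src = model.disc_global(z_src).squeeze(-1)
    d_tgt = model.disc_global(z_tgt).squeeze(-1)
    l_dg = F.binary_cross_entropy(d_src, torch.zeros_like(d_src)) + \
           F.binary_cross_entropy(d_tgt, torch.ones_like(d_tgt))

    # Class-wise discriminator losses: source samples are grouped by the
    # cross-domain labels assigned by the classification tower.
    y_src_cross = (cls_src.squeeze(-1) > 0.5).long()
    l_dc = z_src.new_zeros(())
    for c, disc in enumerate(model.disc_class):
        zs, zt = z_src[y_src_cross == c], z_tgt[y_tgt == c]
        if len(zs) > 0 and len(zt) > 0:
            ds, dt = disc(zs).squeeze(-1), disc(zt).squeeze(-1)
            l_dc = l_dc + F.binary_cross_entropy(ds, torch.zeros_like(ds)) \
                        + F.binary_cross_entropy(dt, torch.ones_like(dt))

    return {"bce": l_bce, "mse": l_mse, "global_adv": l_dg, "classwise_adv": l_dc}
```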
Similar to the global discriminator, class-wise discriminators are also one-layer fully-connected subnetworks with the Sigmoid activation function. To optimize the entire network in an end-to-end fashion, we design the cost function as follows: Where, λ G and λ DC are adversarial regularization coefficients for the global and class-wise discriminators, respectively. In our experiments, we used the following datasets (See Table A2 in the Appendix for more detail): The GDSC dataset was used as the source domain, and all the other datasets were used as the target domain. For the GDSC dataset, raw gene expression data were downloaded from ArrayExpress (E-MTAB-3610) and release 7.0 of the dataset was used to obtain the response outcome. Gene expression data of TCGA patients were downloaded from the Firehose Broad GDAC and the response outcome was obtained from . Patient datasets from clinical trials were obtained from the Gene Expression Omnibus (GEO), and the PDX dataset was obtained from the supplementary material of . For each drug, we selected those patient datasets that applied a comparable measure of the drug response. For preprocessing, the same procedure was adopted as described in the supplementary material of for the raw gene expression data (normalized and z-score transformed) and the drug response data. After the preprocessing, source and target domains had the same number of genes. We designed our experiments to answer the following three questions: 1. Does AITL outperform baselines that are trained only on cell lines and then evaluated on patients (without transfer learning)? To answer this question, we compared AITL against and ) (MOLI) which are state-of-the-art methods of drug response prediction that do not perform domain adaptation. 2. Does AITL outperform baselines that adopt adversarial transductive transfer learning (without adaptation of the output space)? To answer this question, we compared AITL against ) (ADDA) and, state-of-the-art methods of adversarial transductive transfer learning with global and class-wise discriminators, respectively. 3. Does AITL outperform a baseline for inductive transfer learning? To answer this last question, we compared AITL against (ProtoNet) which is the state-of-the-art inductive transfer learning method for small numbers of examples per class. Based on the availability of patient/PDX datasets for a drug, we experimented with four different drugs: Bortezomib, Cisplatin, Docetaxel, and Paclitaxel. It is important to note that these drugs have different mechanisms and are being prescribed for different cancers. For example, Docetaxel is a chemotherapy drug mostly known for treating breast cancer patients , while Bortezomib is a targeted drug mostly used for multiple myeloma patients . Therefore, the datasets we have selected cover different types of anti-cancer drugs. In addition to the experimental comparison against published methods, we also performed an ablation study to investigate the impact of the different AITL components separately. AITL−AD denotes a version of AITL without the adversarial adaptation components, which means the network only contains the multi-task subnetwork. AITL−D G denotes a version of AITL without the 0.54±0.07 0.60±0.14 0.52±0.02 0.58±0.04 ADDA 0.51±0.06 0.56±0.06 0.48±0.06 did not converge ProtoNet 0 global discriminator, which means the network only employs the multi-task subnetwork and classwise discriminators. 
AITL−DC denotes a version of AITL without the class-wise discriminators, which means the network only contains the multi-task subnetwork and the global discriminator. All of the baselines were trained on the same data, tested on patients/PDX for these drugs, and eventually compared to AITL in terms of prediction AUROC and AUPR. Since the majority of the studied baselines cannot use the continuous IC50 values in the source domain, binarized IC50 labels provided by using the Waterfall approach were used to train them. Finally, for the minimax optimization, a gradient reversal layer was employed by AITL and the adversarial baselines . We performed 3-fold cross validation in the experiments to tune the hyper-parameters of AITL and the baselines based on the AUROC. Two folds of the source samples were used for training and the third fold for validation, similarly, two folds of the target samples were used for training and validation, and the third one for the test. The hyper-parameters tuned for AITL were the number of nodes in the hidden layers, learning rates, mini-batch size, weight decay coefficient, the dropout rate, number of epochs, and the regularization coefficients. We considered different ranges for each hyper-parameter and the final selected hyper-parameter settings for each drug and each method are provided in Section A.2 in the Appendix. Finally, each network was re-trained on the selected settings using the train and validation data together for each drug. We used Adagrad for optimizing the parameters of AITL and the baselines implemented in the PyTorch framework, except for the method of which was implemented in R. We used the author's implementations for the method of , MOLI, and ProtoNet. For ADDA, we used an existing implementation from https://github.com/jvanvugt/pytorch-domain-adaptation, and we implemented the method of from scratch. Tables 1 and A3 (Appendix) and Figure 2 report the performance of AITL and the baselines in terms of AUROC and AUPR, respectively. To answer the first experimental question, AITL was compared to the baselines which do not use any adaptation (neither the input nor the output space), i.e. the method of and MOLI, and AITL demonstrated a better performance in both AUROC and AUPR for all of the studied drugs. This indicates that addressing the discrepancies in the input and output spaces leads to better performance compared to training a model on the source domain and testing it on the target domain. To answer the second experimental question, AITL was compared to state-of-the-art methods of adversarial transductive transfer learning, i.e. ADDA and the method of, which address the discrepancy only in the input space. AITL achieved significantly better performance in AUROC for all of the drugs and for three out of four drugs in AUPR (the of for Cisplatin were very competitive with AITL). This indicates that addressing the discrepancies in the both spaces outperforms addressing only the input space discrepancy. Finally, to answer the last experimental question, AITL was compared to ProtoNet ) -a representative of inductive transfer learning with input space adaptation via few-shot learning. AITL outperformed this method in all of the metrics for all of the drugs. We note that the methods of drug response prediction without adaptation, namely the method of and MOLI, outperformed the method of inductive transfer learning based on few-shot learning (ProtoNet). 
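The gradient reversal layer mentioned above is a standard construct from adversarial domain adaptation; a typical PyTorch implementation is shown below.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies gradients by -lambda on the
    backward pass, so minimizing a discriminator's loss simultaneously trains
    the feature extractor to fool it."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage sketch: features pass through the reversal before a discriminator,
# e.g. d_out = disc_global(grad_reverse(features, lambd=0.1))
```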
Moreover, these two methods also showed a very competitive performance compared to the methods of adversarial transductive transfer learning (ADDA and the method of). For Paclitaxel, ADDA did not converge in the first step (training a classifier on the source domain), which was also observed in another study. ProtoNet also did not converge for this drug. We observed that AITL, using all of its components together, outperforms all the additional baselines omitting some of the components. This indicates the importance of both input and output space adaptation. The only exception was for the drug Paclitaxel, where AITL−D G outperforms AITL. We believe the reason for this is that this drug has the most heterogeneous target domain (see Table A1 in the appendix), and therefore the global discriminator component of AITL causes a minor decrease in the performance. All these indicate that addressing the discrepancies in the input and output spaces between the source and target domains, via the AITL method, leads to a better prediction performance. To our surprise, ProtoNet and ADDA could not outperform the method of and MOLI baselines. For ProtoNet, this may be due to the depth of the backbone network. A recent study has shown that a deeper backbone improves ProtoNet performance drastically in image classification. However, in pharmacogenomics, employing a deep backbone is not realistic because of the much smaller sample size compared to an image classification application. Another limitation for ProtoNet is the imbalanced number of training examples in different classes in pharmacogenomics datasets. Specifically, the number of examples per class in the training episodes is limited to the number of samples of the minority class as ProtoNet requires the same number of examples from each class. For ADDA, this lower performance may be due to the lack of end-to-end training of the classifier along with the global discriminator of this method. The reason is that end-to-end training of the classifier along with the discriminators improved the performance of the second adversarial baseline in AUROC and AUPR compared to ADDA. Moreover, the method of ) also showed a relatively better performance in AUPR compared to the method of and MOLI. In pharmacogenomics, patient datasets are small or not publicly available due to privacy and/or data sharing issues. We believe including more patient samples and more drugs will increase generalization capability. In addition, recent studies in pharmacogenomics have shown that using multiple genomic data types (known as multi-omics in genomics) works better than using only gene expression. In this work, we did not consider such data due to the lack of patient samples with multi-omics and drug response data publicly available; however, in principle, AITL also works with such data. Last but not least, we used pharmacogenomics as our motivating application for this new problem of transfer learning, but we believe that AITL can also be employed in other applications. For example, in slow progressing cancers such as prostate cancer, large patient datasets with gene expression and short-term clinical data (source domain) are available, however, patient datasets with long-term clinical data (target domain) are small. AITL may be beneficial to learn a model to predict these long-term clinical labels using the source domain and its short-term clinical labels (a). Moreover, AITL can also be applied to the diagnosis of rare cancers with a small sample size. 
Gene expression data of prevalent cancers with a large sample size, such as breast cancer, may be beneficial to learn a model to diagnose these rare cancers. In this paper, we introduced a new problem in transfer learning motivated by applications in pharmacogenomics. Unlike domain adaptation that only requires adaptation in the input space, this new problem requires adaptation in both the input and output spaces. To address this problem, we proposed AITL, an Adversarial Inductive Transfer Learning method which, to the best of our knowledge, is the first method that addresses the discrepancies in both the input and output spaces. AITL uses a feature extractor to learn features for target and source samples. Then, to address the discrepancy in the output space, AITL utilizes these features as input of a multi-task subnetwork that makes predictions for the target samples and assign cross-domain labels to the source samples. Finally, to address the input space discrepancy, AITL employs global and class-wise discriminators for learning domain-invariant features. In our motivating application, pharmacogenomics, AITL adapts the gene expression data obtained from cell lines and patients in the input space, and also adapts different measures of the drug response between cell lines and patients in the output space. In addition, AITL can also be applied to other applications such as rare cancer diagnosis or predicting long-term clinical labels for slow progressing cancers. We evaluated AITL on four different drugs and compared it against state-of-the-art baselines from three categories in terms of AUROC and AUPR. The empirical indicated that AITL achieved a significantly better performance compared to the baselines showing the benefits of addressing the discrepancies in both the input and output spaces. We conclude that AITL may be beneficial in pharmacogenomics, a crucial task in precision oncology. For future research directions, we believe that the TCGA dataset consisting of gene expression data of more than 12,000 patients (without drug response outcome) can be incorporated in an unsupervised transfer learning setting to learn better domain-invariant features between cell lines and cancer patients. In addition, we did not explore the impact of the chemical structures of the studied drugs in the prediction performance. We believe incorporating this input with transfer learning in the genomic level can lead to a better performance. Currently, AITL borrows information between the input domains indirectly via its multi-task subnetwork and assignment of cross-domain labels. An interesting future direction can be to exchange this information between domains in a more explicit way. Moreover, we also did not perform theoretical analysis on this new problem of transfer learning and we leave it for future work. Finally, we did not distinguish between different losses in the multi-task subnetwork, however, in reality patients are more important than cell lines, and considering a higher weight for the corresponding loss in the cost function can improve the prediction performance. A.1 SUPPLEMENTARY TABLES Patient-Derived Xenografts (PDX) Tumor tissue taken from a patient and implanted into mice to mimic the microenvironment around the tumor. Chemotherapy drugs A type of treatment that stops cancer cells' growth by killing them or stopping them from dividing. Targeted drugs A type of treatment that is designed for a specific type(s) of cancer cells with minor effect on the other cell types. 
Table A2: Datasets used in the experiments.
Dataset | Type | Drug | Domain | Sample size
— | clinical trial | Bortezomib | target | 169
GDSC | cell line | Bortezomib | source | 391
GSE18864 | clinical trial | Cisplatin | target | 24
GSE23554 | clinical trial | Cisplatin | target | 28
TCGA | patient | Cisplatin | target | 66
GDSC | cell line | Cisplatin | source | 829
GSE25065 | clinical trial | Docetaxel | target | 49
GSE28796 | clinical trial | Docetaxel | target | 12
GSE6434 | clinical trial | Docetaxel | target | 24
TCGA | patient | Docetaxel | target | 16
GDSC | cell line | Docetaxel | source | 829
GSE15622 | clinical trial | Paclitaxel | target | 20
GSE22513 | clinical trial | Paclitaxel | target | 14
GSE25065 | clinical trial | Paclitaxel | target | 84
PDX | animal (mouse) | Paclitaxel | target | 43
TCGA | patient | Paclitaxel | target | 35
GDSC | cell line | Paclitaxel | source | 389
ryeRn3NtPH
A novel method of inductive transfer learning that employs adversarial learning and multi-task learning to address the discrepancy in input and output space
Named entity recognition (NER) and relation extraction (RE) are two important tasks in information extraction and retrieval (IE & IR). Recent work has demonstrated that it is beneficial to learn these tasks jointly, which avoids the propagation of error inherent in pipeline-based systems and improves performance. However, state-of-the-art joint models typically rely on external natural language processing (NLP) tools, such as dependency parsers, limiting their usefulness to domains (e.g. news) where those tools perform well. The few neural, end-to-end models that have been proposed are trained almost completely from scratch. In this paper, we propose a neural, end-to-end model for jointly extracting entities and their relations which does not rely on external NLP tools and which integrates a large, pre-trained language model. Because the bulk of our model's parameters are pre-trained and we eschew recurrence for self-attention, our model is fast to train. On 5 datasets across 3 domains, our model matches or exceeds state-of-the-art performance, sometimes by a large margin. The extraction of named entities (named entity recognition, NER) and their semantic relations (relation extraction, RE) are key tasks in information extraction and retrieval (IE & IR). Given a sequence of text (usually a sentence), the objective is to identify both the named entities and the relations between them. This information is useful in a variety of NLP tasks such as question answering, knowledge base population, and semantic search . In the biomedical domain, NER and RE facilitate large-scale biomedical data analysis, such as network biology , gene prioritization , drug repositioning and the creation of curated databases . In the clinical domain, NER and RE can aid in disease and treatment prediction, readmission prediction, de-identification, and patient cohort identification . Most commonly, the tasks of NER and RE are approached as a pipeline, with NER preceding RE. There are two main drawbacks to this approach: Pipeline systems are prone to error propagation between the NER and RE systems. One task is not able to exploit useful information from the other (e.g. the type of relation identified by the RE system may be useful to the NER system for determining the type of entities involved in the relation, and vice versa). More recently, joint models that simultaneously learn to extract entities and relations have been proposed, alleviating the aforementioned issues and achieving state-of-the-art performance (; ; ; ; Adel & Schütze, 2017; a; b; ;). Many of the proposed joint models for entity and relation extraction rely heavily on external natural language processing (NLP) tools such as dependency parsers. For instance, propose a recurrent neural network (RNN)-based joint model that uses a bidirectional long-short term memory network (BiLSTM) to model the entities and a tree-LSTM to model the relations between entities; propose a similar model for biomedical text. The tree-LSTM uses dependency tree information extracted using an external dependency parser to model relations between entities. The use of these external NLP tools limits the effectiveness of a model to domains (e.g. news) where those NLP tools perform well. As a remedy to this problem, Bekoulis et al. (2018a) proposes a neural, end-to-end system that jointly learns to extract entities and relations without relying on external NLP tools. In Bekoulis et al. (2018b), they augment this model with adversarial training. 
propose a different, albeit similar end-to-end neural model which makes use of deep biaffine attention . approach the problem with multi-turn question answering, posing templated queries to a BERT-based QA model whose answers constitute extracted entities and their relations and achieve state-of-the-art on three popular benchmark datasets. While demonstrating strong performance, end-to-end systems like Bekoulis et al. (2018a; b) and suffer from two main drawbacks. The first is that most of the models parameters are trained from scratch. For large datasets, this can lead to long training times. For small datasets, which are common in the biomedical and clinical domains where it is particularly challenging to acquire labelled data, this can lead to poor performance and/or overfitting. The second is that these systems typically contain RNNs, which are sequential in nature and cannot be parallelized within training examples. The multi-pass QA model proposed in alleviates these issues by incorporating a pre-trained language model, BERT , which eschews recurrence for self-attention. The main limitation of their approach is that it relies on handcrafted question templates to achieve maximum performance. This may become a limiting factor where domain expertise is required to craft such questions (e.g., for biomedical or clinical corpora). Additionally, one has to create a question template for each entity and relation type of interest. In this study, we propose an end-to-end model for joint NER and RE which addresses all of these issues. Similar to past work, our model can be viewed as a mixture of a NER module and a RE module (Figure 1). Unlike most previous works, we include a pre-trained, transformer-based language model, specifically BERT , which achieved state-of-the-art performance across many NLP tasks. The weights of the BERT model are fine-tuned during training, and the entire model is trained in an end-to-end fashion. Our main contributions are as follows: Our solution is truly end-to-end, relying on no handcrafted features (e.g. templated questions) or external NLP tools (e.g. dependency parsers). Our model is fast to train (e.g. under 10 minutes on a single GPU for the CoNLL04 corpus), as most of its parameters are pre-trained and we avoid recurrence. We match or exceed state-of-the-art performance for joint NER and RE on 5 datasets across 3 domains. Figure 1 illustrates the architecture of our approach. Our model is composed of an NER module and an RE module. The NER module is identical to the one proposed by. For a given input sequence s of N word tokens w 1, w 2,..., w N, the pre-trained BERT BASE model first produces a sequence of vectors, x which are then fed to a feed-forward neural network (FFNN) for classification. The output size of this layer is the number of BIOES-based NER labels in the training data, |C (NER) |. In the BIOES tag scheme, each token is assigned a label, where the B-tag indicates the beginning of an entity span, I-the inside, E-the end and S-is used for any single-token entity. All other tokens are assigned the label O. During training, a cross-entropy loss is computed for the NER objective, where s (NER) n is the predicted score that token n ∈ N belongs to the ground-truth entity class and s is the predicted score for token n belonging to the entity class c ∈ C (NER). In the RE module, the predicted entity labels are obtained by taking the argmax of each score vector s. 
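A minimal sketch of the NER module described above: a feed-forward classifier over per-token BERT hidden states trained with token-level cross-entropy. The hidden size 768 corresponds to BERT-BASE; other names are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NERHead(nn.Module):
    """FFNN over BERT token representations, producing BIOES label scores."""

    def __init__(self, num_ner_labels, hidden_size=768):
        super().__init__()
        self.ffnn = nn.Linear(hidden_size, num_ner_labels)

    def forward(self, bert_hidden_states, ner_labels=None):
        # bert_hidden_states: (batch, seq_len, hidden_size)
        scores = self.ffnn(bert_hidden_states)       # (batch, seq_len, |C_NER|)
        loss = None
        if ner_labels is not None:                   # (batch, seq_len) gold BIOES ids
            loss = F.cross_entropy(scores.view(-1, scores.size(-1)),
                                   ner_labels.view(-1))
        return scores, loss
```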
The predicted entity labels are then embedded to produce a sequence of fixed-length, continuous vectors, e^(NER)_1, ..., e^(NER)_N, which are concatenated with the hidden states from the final layer in the BERT model and learned jointly with the rest of the model's parameters. Following previous joint models, we incrementally construct the set of relation candidates, R, using all possible combinations of the last word tokens of predicted entities, i.e. words with E- or S- labels. An entity pair is assigned to a negative relation class (NEG) when the pair has no relation or when the predicted entities are not correct. Once relation candidates are constructed, classification is performed with a deep bilinear attention mechanism. To encode directionality, the mechanism uses FFNNs to project each x^(RE)_i into head and tail vector representations, corresponding to whether the i-th word serves as head or tail argument of the relation. These projections are then fed to a biaffine classifier, where U is an m × |C^(RE)| × m tensor, W is a |C^(RE)| × (2m) matrix, and b is a bias vector. Here, m is the size of the output layers of FFNN_head and FFNN_tail and C^(RE) is the set of all relation classes (including NEG). During training, a second cross-entropy loss is computed for the RE objective, where s^(RE)_r is the predicted score that relation candidate r ∈ R belongs to the ground-truth relation class and s^(RE)_r,c is the predicted score for relation r belonging to the relation class c ∈ C^(RE). The model is trained in an end-to-end fashion to minimize the sum of the NER and RE losses. Entity pre-training has been proposed as a solution to the problem of low-performance entity detection in the early stages of training. It is implemented by delaying the training of the RE module by some number of epochs, before training the entire model jointly. Our implementation of entity pre-training is slightly different. Instead of delaying training of the RE module by some number of epochs, we weight the contribution of L_RE to the total loss during the first epoch of training, where λ is increased linearly from 0 to 1 during the first epoch and set to 1 for the remaining epochs. We chose this scheme because the NER module quickly achieves good performance for all datasets (i.e. within one epoch). In early experiments, we found this scheme to outperform a delay of a full epoch. We implemented our model in PyTorch using the BERT_BASE model from the PyTorch Transformers library. Our model is available at our GitHub repository. Furthermore, we use NVIDIA's automatic mixed precision (AMP) library Apex to speed up training and reduce memory usage without affecting task-specific performance. To demonstrate the generalizability of our model, we evaluate it on 5 commonly used benchmark corpora across 3 domains. All corpora are in English. Detailed corpus statistics are presented in Table A.1 of the appendix. The Automatic Content Extraction (ACE04) corpus is commonly used to benchmark NER and RE methods. There are 7 entity types and 7 relation types. ACE05 builds on ACE04, splitting the Physical relation into two classes (Physical and PartWhole), removing the Discourse relation class and merging Employment-Membership-Subsidiary and Person-Organization-Affiliation into one class (Employment-Membership-Subsidiary).
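The relation-extraction path just described (entity-label embeddings concatenated with BERT states, distinct head/tail FFNN projections, a biaffine scorer) and the λ ramp used for entity pre-training can be sketched as follows. Sizes such as the label-embedding dimension and m are illustrative assumptions, not the settings used in our experiments.

import torch
import torch.nn as nn

class BiaffineRelationScorer(nn.Module):
    def __init__(self, hidden_size, num_ner_labels, num_rel_classes, label_dim=25, m=128):
        super().__init__()
        self.label_emb = nn.Embedding(num_ner_labels, label_dim)
        in_dim = hidden_size + label_dim
        self.ffnn_head = nn.Sequential(nn.Linear(in_dim, m), nn.ReLU())
        self.ffnn_tail = nn.Sequential(nn.Linear(in_dim, m), nn.ReLU())
        self.U = nn.Parameter(torch.randn(m, num_rel_classes, m) * 0.01)  # bilinear term
        self.W = nn.Linear(2 * m, num_rel_classes)                        # linear term + bias b

    def forward(self, bert_states, predicted_ner_labels):
        # bert_states: (n, hidden) vectors for the n candidate entity tokens (E-/S- tokens)
        x_re = torch.cat([bert_states, self.label_emb(predicted_ner_labels)], dim=-1)
        head, tail = self.ffnn_head(x_re), self.ffnn_tail(x_re)           # (n, m) each
        bilinear = torch.einsum("im,mcn,jn->ijc", head, self.U, tail)     # biaffine scores
        pairs = torch.cat([head.unsqueeze(1).expand(-1, tail.size(0), -1),
                           tail.unsqueeze(0).expand(head.size(0), -1, -1)], dim=-1)
        return bilinear + self.W(pairs)                                   # (n, n, |C_RE|)

def total_loss(loss_ner, loss_re, step, steps_per_epoch):
    # entity pre-training: ramp the RE loss weight lambda from 0 to 1 over the first epoch
    lam = min(1.0, step / steps_per_epoch)
    return loss_ner + lam * loss_re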
For ACE04, we follow by removing the Discourse relation and evaluating our model using 5-fold cross-validation on the bnews and nwire subsets, where 10% of the data was held out within each fold as a validation set. For ACE05, we use the same test split as. We use 5-fold cross-validation on the remaining data to choose the hyperparameters. Once hyperparameters are chosen, we train on the combined data from all the folds and evaluate on the test set. For both corpora, we report the micro-averaged F 1 score. We obtained the pre-processing scripts from 4. The CoNLL04 corpus was introduced in and consists of articles from the Wall Street Journal (WSJ) and Associated Press (AP). There are 4 entity types and 5 relation types. We use the same test set split as 5. We use 5-fold cross-validation on the remaining data to choose hyperparameters. Once hyperparameters are chosen, we train on the combined data from all folds and evaluate on the test set, reporting the micro-averaged F 1 score. The adverse drug event corpus was introduced by to serve as a benchmark for systems that aim to identify adverse drug events from free-text. It consists of the abstracts of medical case reports retrieved from PubMed 6. There are two entity types, Drug and Adverse effect and one relation type, Adverse drug event. Similar to previous work (; b), we remove ∼130 relations with overlapping entities and evaluate our model using 10-fold cross-validation, where 10% of the data within each fold was used as a validation set, 10% as a test set and the remaining data is used as a train set. We report the macro F 1 score averaged across all folds. The 2010 i2b2/VA dataset was introduced by for the 2010 i2b2/Va Workshop on Natural Language Processing Challenges for Clinical Records. The workshop contained an NER task focused on the extraction of 3 medical entity types (Problem, Treatment, Test) and an RE task for 8 relation types. In the official splits, the test set contains roughly twice as many examples as the train set. To increase the number of training examples while maintaining a rigorous evaluation, we elected to perform 5-fold cross-validation on the combined data from both partitions. We used 10% of the data within each fold as a validation set, 20% as a test set and the remaining data was used as a train set. We report the micro F 1 score averaged across all folds. To the best of our knowledge, we are the first to evaluate a joint NER and RE model on the 2010 i2b2/VA dataset. Therefore, we decided to compare to scores obtained by independent NER and RE systems. We note, however, that the scores of independent RE systems are not directly comparable to the scores we report in this paper. This is because RE is traditionally framed as a sentence-level classification problem. During pre-processing, each example is permutated into processed examples containing two "blinded" entities and labelled for one relation class. E.g. the example: "His PCP had recently started ciprofloxacin TREATMENT for a UTI PROBLEM " becomes "His PCP had recently started @TREATMENT$ for a @PROBLEM$", where the model is trained to predict the target relation type, "Treatment is administered for medical problem" (TrAP). This task is inherently easier than the joint setup, for two reasons: relation predictions are made on ground-truth entities, as opposed to predicted entities (which are noisy) and the model is only required to make one classification decision per pre-processed sentence. 
In the joint setup, a model must identify any number of relations (or the lack thereof) between all unique pairs of predicted entities in a given input sentence. To control for the first of these differences, we report scores from our model in two settings, once when predicted entities are used as input to the RE module, and once when ground-truth entities are used. Besides batch size, learning rate and number of training epochs, we used the same hyperparameters across all experiments (see Table A .2). Similar to , learning rate and batch size were selected for each dataset using a minimal grid search (see See Table A. 3). One hyperparameter selected by hand was the choice of the pre-trained weights used to initialize the BERT BASE model. For general domain corpora, we found the cased BERT BASE weights from to work well. For biomedical corpora, we used the weights from BioBERT , which recently demonstrated state-of-the-art performance for biomedical NER, RE and QA. Similarly, for clinical corpora we use the weights provided by , who pre-trained BERT BASE on PubMed abstracts and clinical notes from MIMIC-III 7. 4.1 JOINTLY LEARNING NER AND RE Table 1 shows our in comparison to previously published , grouped by the domain of the evaluated corpus. We find that on every dataset besides i2b2, our model improves NER performance, for an average improvement of ∼2%. This improvement is particularly large on the ACE04 and ACE05 corpora (3.98% and 2.41% respectively). On i2b2, our joint model performs within 0.29% of the best independent NER solution. For relation extraction, we outperform previous methods on 2 datasets and come within ∼2% on both ACE05 and CoNLL04. In two cases, our performance improvement is substantial, with improvements of 4.59% and 10.25% on the ACE04 and ADE corpora respectively. For i2b2, our score is not directly comparable to previous systems (as discussed in section 3.1.4) but will facilitate future comparisons of joint NER and RE methods on this dataset. By comparing overall performance, we find that our approach achieves new state-of-the-art performance for 3 popular benchmark datasets (ACE04, ACE05, ADE) and comes within 0.2% for CoNLL04. To determine which training strategies and components are responsible for our models performance, we conduct an ablation analysis on the CoNLL04 corpus (Table 2) Removing FFNN head and FFNN tail has, by far, the largest negative impact on performance. Interestingly, however, replacing FFNN head and FFNN tail with a single FFNN has only a small negative impact. This suggests that while these layers are very important for model performance, using distinct FFNNs for the projection of head and tail entities (as opposed to the same FFNN) is relatively much less important. The next most impactful ablation was entity pre-training, suggesting that low-performance entity detection during the early stages of training is detrimental to learning (see section 2.1). Finally, we note that the importance of entity embeddings is surprising, as a previous study has found that entity embeddings did not help performance on the CoNLL04 corpus (a), although their architecture was markedly different. We conclude that each of our ablated components is necessary to achieve maximum performance. One advantage of including a transformer-based language model is that we can easily visualize the attention weights with respect to some input. This visualization is useful, for example, in detecting model bias and locating relevant attention heads . 
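As an illustration of how such attention weights can be accessed for inspection (our released notebooks use BertViz; this stand-alone sketch relies only on the Transformers API, and the model name, layer/head indices and example sentence are arbitrary choices):

import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertModel.from_pretrained("bert-base-cased", output_attentions=True)
model.eval()

inputs = tokenizer("John Wilkes Booth shot Lincoln .", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer, each (batch, heads, seq_len, seq_len)
layer, head = 2, 5
attn = outputs.attentions[layer][0, head]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for i, tok in enumerate(tokens):
    print(f"{tok:>10s} attends most strongly to {tokens[attn[i].argmax().item()]}")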
Previous works have used such visualizations to demonstrate that specific attention heads mark syntactic dependency relations, and that lower layers tend to learn more about syntax while higher layers tend to encode more semantics. In Figure 2 we visualize the attention weights of select layers and attention heads from an instance of BERT fine-tuned within our model on the CoNLL04 corpus. We display four patterns that are easily interpreted: paying attention to the next and previous words, paying attention to the word itself, and paying attention to the end of the sentence. (Notes to Table 1: * to the best of our knowledge, there are no published joint NER and RE models that evaluate on the i2b2 2010 corpus; we compare our model to the state of the art for each individual task (see section 3.1.4). ** We compare to the scores achieved by their BERT_BASE model.) These same patterns have been found in pre-trained BERT models that have not been fine-tuned on a specific, supervised task, and therefore are retained after our fine-tuning procedure. To facilitate further analysis of our learned model, we make available Jupyter and Google Colaboratory notebooks on our GitHub repository, where users can use multiple views to explore the learned attention weights of our models. We use the BertViz library to render the interactive, HTML-based views and to access the attention weights used to plot the heat maps. In this paper, we introduced an end-to-end model for entity and relation extraction. Our key contributions are: no reliance on any hand-crafted features (e.g. templated questions) or external NLP tools (e.g. dependency parsers); integration of a pre-trained, transformer-based language model; and state-of-the-art performance on 5 datasets across 3 domains. Furthermore, our model is inherently modular. One can easily initialize the language model with pre-trained weights better suited for a domain of interest (e.g. BioBERT for biomedical corpora) or swap BERT for a comparable language model (e.g. XLNet). Finally, because most of its parameters are pre-trained and we avoid recurrence, our model is fast to train, converging in approximately 1 hour or less on a single GPU for all datasets used in this study. Our model outperformed the previous state of the art on ADE by the largest margin (6.53%). While exciting, we believe this corpus was particularly easy to learn. The majority of sentences (∼68%) are annotated for two entities (drug and adverse effect) and one relation (adverse drug event). Ostensibly, a model should be able to exploit this pattern to get near-perfect performance on the majority of sentences in the corpus. As a test, we ran our model again, this time using ground-truth entities in the RE module (as opposed to predicted entities) and found that the model very quickly reached almost perfect performance for RE on the test set (∼98%). As such, high performance on the ADE corpus is not likely to transfer to real-world scenarios involving the large-scale annotation of diverse biomedical articles. In our experiments, we consider only intra-sentence relations. However, the multiple entities within a document generally exhibit complex, inter-sentence relations. Our model is not currently capable of extracting such inter-sentence relations, and therefore our restriction to intra-sentence relations will limit its usefulness for certain downstream tasks, such as knowledge base creation. We also ignore the problem of nested entities, which are common in biomedical corpora.
In the future, we would like to extend our model to handle both nested entities and inter-sentence relations. Finally, given that multilingual, pre-trained weights for BERT exist, we would also expect our model's performance to hold across multiple languages. We leave this question to future work. A.1 Corpus statistics: detailed statistics for each corpus are presented in Table A.1. Among the hyperparameters listed in Table A.2, the settings shared across experiments include gradient normalization with threshold Γ = 1 (the gradient is rescaled whenever its norm exceeds Γ) and L2 weight decay of 0.1.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rkgqm0VKwB
A novel, high-performing architecture for end-to-end named entity recognition and relation extraction that is fast to train.
In this work we explore a straightforward variational Bayes scheme for Recurrent Neural Networks. Firstly, we show that a simple adaptation of truncated backpropagation through time can yield good quality uncertainty estimates and superior regularisation at only a small extra computational cost during training, also reducing the amount of parameters by 80\%. Secondly, we demonstrate how a novel kind of posterior approximation yields further improvements to the performance of Bayesian RNNs. We incorporate local gradient information into the approximate posterior to sharpen it around the current batch statistics. We show how this technique is not exclusive to recurrent neural networks and can be applied more widely to train Bayesian neural networks. We also empirically demonstrate how Bayesian RNNs are superior to traditional RNNs on a language modelling benchmark and an image captioning task, as well as showing how each of these methods improve our model over a variety of other schemes for training them. We also introduce a new benchmark for studying uncertainty for language models so future methods can be easily compared. Recurrent Neural Networks (RNNs) achieve state-of-the-art performance on a wide range of sequence prediction tasks BID0 BID22 BID50 BID32. In this work we examine how to add uncertainty and regularisation to RNNs by means of applying Bayesian methods to training. This approach allows the network to express uncertainty via its parameters. At the same time, by using a prior to integrate out the parameters to average across many models during training, it gives a regularisation effect to the network. Recent approaches either justify dropout BID43 and weight decay as a variational inference scheme BID12, or apply Stochastic Gradient Langevin dynamics (, SGLD) to truncated backpropagation in time directly BID13. Interestingly, recent work has not explored further directly applying a variational Bayes inference scheme BID3 for RNNs as was done in BID14. We derive a straightforward approach based upon Bayes by Backprop that we show works well on large scale problems. Our strategy is a simple alteration to truncated backpropagation through time that in an estimate of the posterior distribution on the weights of the RNN. This formulation explicitly leads to a cost function with an information theoretic justification by means of a bits-back argument BID18 where a KL divergence acts as a regulariser. The form of the posterior in variational inference shapes the quality of the uncertainty estimates and hence the overall performance of the model. We shall show how performance of the RNN can be improved by means of adapting ("sharpening") the posterior locally to a batch. This sharpening adapts the variational posterior to a batch of data using gradients based upon the batch. This can be viewed as a hierarchical distribution, where a local batch gradient is used to adapt a global posterior, forming a local approximation for each batch. This gives a more flexible form to the typical assumption of Gaussian posterior when variational inference is applied to neural networks, which reduces variance. This technique can be applied more widely across other Bayesian models. 
The contributions of our work are as follows:• We show how Bayes by Backprop (BBB) can be efficiently applied to RNNs.• We develop a novel technique which reduces the variance of BBB, and which can be widely adopted in other maximum likelihood frameworks.• We improve performance on two widely studied benchmarks outperforming established regularisation techniques such as dropout by a big margin.• We introduce a new benchmark for studying uncertainty of language models. Bayes by Backprop BID14 is a variational inference BID46 scheme for learning the posterior distribution on the weights θ ∈ R d of a neural network. This posterior distribution is typically taken to be a Gaussian with mean parameter µ ∈ R d and standard deviation parameter σ ∈ R d, denoted N (θ|µ, σ 2). Note that we use a diagonal covariance matrix, and d -the dimensionality of the parameters of the network -is typically in the order of millions. Let log p(y|θ, x) be the log-likelihood of the model, then the network is trained by minimising the variational free energy: DISPLAYFORM0 where p(θ) is a prior on the parameters. Minimising the variational free energy is equivalent to maximising the log-likelihood log p(y|θ, x) subject to a KL complexity term on the parameters of the network that acts as a regulariser: DISPLAYFORM1 In the Gaussian case with a zero mean prior, the KL term can be seen as a form of weight decay on the mean parameters, where the rate of weight decay is automatically tuned by the standard deviation parameters of the prior and posterior. Please refer to the supplementary material for the algorithmic details on Bayes by Backprop. The uncertainty afforded by Bayes by Backprop trained networks has been used successfully for training feedforward models for supervised learning and to aid exploration by reinforcement learning agents BID30 BID21, but as yet, it has not been applied to recurrent neural networks. The core of an RNN, f, is a neural network that maps the RNN state s t at step t, and an input observation x t to a new RNN state s t+1, f: (s t, x t) → s t+1. The exact equations of an LSTM core can be found in the supplemental material Sec A.2.An RNN can be trained on a sequence of length T by backpropagation through by unrolling T times into a feedforward network. Explicitly, we set s i = f (s i−1, x i), for i = 1,..., T. We shall refer to an RNN core unrolled for T steps by s 1:T = F T (x 1:T, s 0). Note that the truncated version of the algorithm can be seen as taking s 0 as the last state of the previous batch, s T.RNN parameters are learnt in much the same way as in a feedforward neural network. A loss (typically after further layers) is applied to the states s 1:T of the RNN, and then backpropagation is used to update the weights of the network. Crucially, the weights at each of the unrolled steps are shared. Thus each weight of the RNN core receives T gradient contributions when the RNN is unrolled for T steps. Applying BBB to RNNs is depicted in FIG0 where the weight matrices of the RNN are drawn from a distribution (learnt by BBB). However, this direct application raises two questions: when to sample the parameters of the RNN, and how to weight the contribution of the KL regulariser of. We shall briefly justify the adaptation of BBB to RNNs, given in FIG0. The variational free energy of for an RNN on a sequence of length T is: DISPLAYFORM0 where p(y 1:T |θ, x 1:T) is the likelihood of a sequence produced when the states of an unrolled RNN F T are fed into an appropriate probability distribution. 
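The displayed equations referenced in this section were lost in extraction; a standard form consistent with the surrounding description (a hedged reconstruction, not necessarily the authors' exact notation) is F(µ, σ) = KL[ q(θ|µ, σ) || p(θ) ] − E_{q(θ|µ,σ)}[ log p(y|θ, x) ], which is equivalent to maximising the expected log-likelihood E_{q(θ|µ,σ)}[ log p(y|θ, x) ] subject to the KL regulariser KL[ q(θ|µ, σ) || p(θ) ]; for an RNN unrolled over a sequence of length T the likelihood term becomes log p(y_{1:T}|θ, x_{1:T}). A minimal PyTorch sketch of one layer trained with this objective is given below (a single Gaussian prior is used here for brevity, whereas the experiments later use a scale-mixture prior).

import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesByBackpropLinear(nn.Module):
    # Mean-field Gaussian posterior N(mu, sigma^2) over the weights of one linear layer.
    def __init__(self, n_in, n_out, prior_std=0.1):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(n_out, n_in) * 0.05)
        self.rho = nn.Parameter(torch.full((n_out, n_in), -4.0))   # sigma = softplus(rho) > 0
        self.prior_std = prior_std

    def forward(self, x):
        sigma = F.softplus(self.rho)
        eps = torch.randn_like(sigma)        # sample once per minibatch and keep it fixed
        w = self.mu + sigma * eps            # reparameterisation: theta = mu + sigma * eps
        return F.linear(x, w)

    def kl(self):
        # closed-form KL between N(mu, sigma^2) and the zero-mean Gaussian prior
        sigma = F.softplus(self.rho)
        q_var, p_var = sigma ** 2, self.prior_std ** 2
        return 0.5 * (q_var / p_var + self.mu ** 2 / p_var - 1.0 - torch.log(q_var / p_var)).sum()

# Per-minibatch training loss: negative log-likelihood + KL, with the KL down-weighted so that,
# summed over all minibatches and truncated sequences, it is counted exactly once (see below).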
The parameters of the entire network are θ, with variational parameters µ and σ. Each training step proceeds as follows: sample ε ∼ N(0, I), ε ∈ R^d, and set the network parameters to θ = µ + σ ∘ ε; sample a minibatch of truncated sequences (x, y); do forward and backward propagation as normal, and let g be the gradient of −log p(y|θ, x) with respect to θ; let g^KL_θ, g^KL_µ and g^KL_σ be the gradients of log N(θ|µ, σ²) − log p(θ) with respect to θ, µ and σ respectively; update µ using the gradient g + (g^KL_θ + g^KL_µ)/(BC), and update σ using the gradient g ∘ ε + (g^KL_θ ∘ ε + g^KL_σ)/(BC), where B and C are defined below and the contributions of the KL gradients follow from the reparameterisation θ = µ + σ ∘ ε. Although the RNN is unrolled T times, each weight is penalised just once by the KL term, rather than T times. It is also clear from the sequence free energy that, when a Monte Carlo approximation is taken to the expectation, the parameters θ should be held fixed throughout the entire sequence. Two complications arise with the above naive derivation in practice: firstly, sequences are often long enough, and models sufficiently large, that unrolling the RNN for the whole sequence is prohibitive; secondly, to reduce variance in the gradients, more than one sequence is trained at a time. Thus the typical regime for training RNNs involves training on mini-batches of truncated sequences. Let B be the number of mini-batches and C the number of truncated sequences ("cuts"); then the free energy can be written as a sum over minibatches and cuts, F(µ, σ) = Σ_b Σ_c E_{q(θ|µ,σ)}[ −log p(y^(b,c) | θ, x^(b,c), s^(b,c)_prev) ] + KL[ q(θ|µ, σ) || p(θ) ], where the (b, c) superscript denotes elements of the c-th truncated sequence in the b-th minibatch and s^(b,c)_prev is the RNN state carried over from the previous cut. Thus the free energy of minibatch b of a truncated sequence c can be written as F^(b,c)(µ, σ) = E_{q(θ|µ,σ)}[ −log p(y^(b,c) | θ, x^(b,c), s^(b,c)_prev) ] + w^(b,c)_KL · KL[ q(θ|µ, σ) || p(θ) ], where w^(b,c)_KL is a weight that distributes the KL cost over the minibatches and cuts; choosing w^(b,c)_KL = 1/(BC) ensures the KL term is counted exactly once overall. Finally, the question of when to sample weights follows naturally from taking Monte Carlo approximations to this objective: for each minibatch, sample a fresh set of parameters. The choice of variational posterior q(θ) as described in Section 3 can be enhanced by adding side information that makes the posterior over the parameters more accurate, thus reducing the variance of the learning process. Akin to Variational Auto-Encoders (VAEs) BID25 BID41, which propose a powerful distribution q(z|x) to improve the gradient estimates of the (intractable) likelihood function p(x), here we propose a similar approach. Namely, for a given minibatch of data (inputs and targets) (x, y) sampled from the training set, we construct such a q(θ|(x, y)). Thus, we compute a proposal distribution where the latents (z in VAEs) are the parameters θ (which we wish to integrate out), and the "privileged" information upon which we condition is a minibatch of data. We could have chosen to condition on a single example (x, y) instead of a batch, but this would have yielded a different parameter vector θ per example. Conditioning on the full minibatch has the advantage of producing a single θ per minibatch, so that matrix-matrix operations can still be carried out. This "sharpened" posterior yields more stable optimisation, a common pitfall of Bayesian approaches to training neural networks, and the justification of this method follows from strong empirical evidence and extensive work on VAEs. A challenging aspect of modelling the variational posterior q(θ|(x, y)) is the large number of dimensions of θ ∈ R^d. When the dimensionality is not in the order of millions, a powerful non-linear function (such as a neural network) can be used which transforms observations (x, y) to the parameters of a Gaussian distribution, as proposed in BID25; BID41. Unfortunately, this neural network would have far too many parameters, making this approach unfeasible. Given that the loss −log p(y|θ, x) is differentiable with respect to θ, we propose to parameterise q as a linear combination of θ and g_θ = −∇_θ log p(y|θ, x), both d-dimensional vectors.
Thus, we can define a hierarchical posterior of the form DISPLAYFORM0 with µ, σ ∈ R d, and q(ϕ) = N (ϕ|µ, σ) -the same as in the standard BBB method. Finally, let * denote element-wise multiplication, we then have DISPLAYFORM1 where η ∈ R d is a free parameter to be learnt and σ 0 a scalar hyper-parameter of our model. η can be interpreted as a per-parameter learning rate. During training, we get θ ∼ q(θ|(x, y)) via ancestral sampling to optimise the loss DISPLAYFORM2 DISPLAYFORM3 where µ, σ, η are our model parameters, and p are the priors for the distributions defining q (for exact details of these distributions see Section 6). The constant C is the number of truncated sequences as defined in Section3. The bound on the true data likelihood which yields eq. is derived in Sec 4.1. Algorithm 1 presents how learning is performed in practice. Sample a minibatch (x, y) of truncated sequences. DISPLAYFORM0 As long as the improvement of the log likelihood log p(y|θ, x) term along the gradient g ϕ is greater than the KL cost added for posterior sharpening (KL [q(θ|ϕ, (x, y)) || p(θ|ϕ)]), then the lower bound in will improve. This justifies the effectiveness of the posterior over the parameters proposed in eq. 7 which will be effective as long as the curvature of log p(y|θ, x) is large. Since η is learnt, it controls the tradeoff between curvature improvement and KL loss. Studying more powerful parameterisations is part of future research. Unlike regular BBB where the KL terms can be ignored during inference, there are two options for doing inference under posterior sharpening. The first involves using q(ϕ) and ignoring any KL terms, similar to regular BBB. The second involves using q(θ|ϕ, (x, y)) which requires using the term KL [q(θ|ϕ, (x, y)) || p(θ|ϕ)] yielding an upper bound on perplexity (lower bound in log probability; see Section 4.2 for details). This parameterisation involves computing an extra gradient and incurs a penalty in training speed. A comparison of the two inference methods is provided in Section 6. Furthermore, in the case of RNNs, the exact gradient cannot be efficiently computed, so BPTT is used. Here we turn to deriving the training loss function we use for posterior sharpening. The basic idea is to take a variational approximation to the marginal likelihood p(x) that factorises hierarchically. Hierarchical variational schemes for topic models have been studied previously in BID40. Here, we shall assume a hierarchical prior for the parameters such that p(x) = p(x|θ)p(θ|ϕ)p(ϕ)dθdϕ. Then we pick a variational posterior that conditions upon x, and factorises as q(θ, ϕ|x) = q(θ|ϕ, x)q(ϕ). The expected lower bound on p(x) is then as follows: DISPLAYFORM0 DISPLAYFORM1 We note that the procedure of sharpening the posterior as explained above has similarities with other techniques. Perhaps the most obvious one is line search: indeed, η is a trained parameter that does line search along the gradient direction. Probabilistic interpretations have been given to line search in e.g. BID34, but ours is the first that uses a variational posterior with the reparametrization trick/perturbation analysis gradient. Also, the probabilistic treatment to line search can also be interpreted as a trust region method. Another related technique is dynamic evaluation BID37, which trains an RNN during evaluation of the model with a fixed learning rate. The update applied in this case is cumulative, and only uses previously seen data. 
Thus, they can take a purely deterministic approach and ignore any KL between a posterior with privileged information and a prior. As we will show in Section 6, performance gains can be significant as the data exhibits many short term correlations. Lastly, learning to optimise (or learning to learn) BID28 BID1 is related in that a learning rate is learned so that it produces better updates than those provided by e.g. AdaGrad BID9 or Adam BID24. Whilst they train a parametric model, we treat these as free parameters (so that they can adapt more quickly to the non-stationary distribution w.r.t. parameters). Notably, we use gradient information to inform a variational posterior so as to reduce variance of Bayesian Neural Networks. Thus, although similar in flavour, the underlying motivations are quite different. Applying Bayesian methods to neural networks has a long history, with most common approximations having been tried. BID5 propose various maximum a posteriori schemes for neural networks, including an approximate posterior centered at the mode. also suggest using second order derivatives in the prior to encourage smoothness of the ing network. BID18 proposed using variational methods for compressing the weights of neural networks as a regulariser. BID20 suggest an MDL loss for single layer networks that penalises non-robust weights by means of an approximate penalty based upon perturbations of the weights on the outputs.; investigated using the Laplace approximation for capturing the posterior of neural networks. investigated the use of hybrid Monte Carlo for training neural networks, although it has so far been difficult to apply these to the large sizes of networks considered here. More recently derived a variational inference scheme for neural networks and Blundell et al. FORMULA0 extended this with an update for the variance that is unbiased and simpler to compute. derives a similar algorithm in the case of a mixture posterior. Several authors have claimed that dropout BID43 and Gaussian dropout BID47 can be viewed as approximate variational inference schemes BID11 , respectively FORMULA0 proposed a variational scheme with biased gradients for the variance parameter using the Fisher matrix. Our work extends this by using an unbiased gradient estimator without need for approximating the Fisher and also add a novel posterior approximation. Variational methods typically underestimate the uncertainty in the posterior (as they are mode seeking, akin to the Laplace approximation), whereas expectation propagation methods often average over modes and so tend to overestimate uncertainty (although there are counter examples for each depending upon the particular factorisation and approximations used; see for example BID44). Nonetheless, several papers explore applying expectation propagation to neural networks: BID42 derive a closed form approximate online expectation propagation algorithm, whereas Hernández- proposed using multiple passes of assumed density filtering (in combination with early stopping) attaining good performance on a number of small data sets. BID16 derive a distributed expectation propagation scheme with SGLD BID48 as an inner loop. Others have also considered applying SGLD to neural networks BID27 and BID13 more recently used SGLD for LSTMs (we compare to these in our experiments). We present the of our method for a language modelling and an image caption generation task. 
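Before moving to the experiments, the posterior sharpening step defined in Section 4 can be summarised in a short sketch. It assumes a functional model interface f(params, x) (for example via torch.func.functional_call) and a flat parameter vector ϕ sampled from q(ϕ) = N(µ, σ²) that requires gradients; only the sharpening-specific KL is shown, and the usual KL[q(ϕ) || p(ϕ)] term from standard BBB is added to the loss as before.

import torch

def sharpened_loss(f, phi, eta, sigma0, x, y, nll_fn):
    # 1) gradient of the minibatch NLL at the sampled parameters phi (the "side information")
    nll_phi = nll_fn(f(phi, x), y)
    g_phi = torch.autograd.grad(nll_phi, phi, create_graph=True)[0]
    # 2) sharpen: theta ~ N(phi - eta * g_phi, sigma0^2 I); eta is a learned per-parameter vector
    mean = phi - eta * g_phi
    theta = mean + sigma0 * torch.randn_like(phi)
    # 3) loss at the sharpened parameters plus KL[q(theta | phi, (x, y)) || N(phi, sigma0^2 I)],
    #    which reduces to ||eta * g_phi||^2 / (2 * sigma0^2) since both Gaussians share variance
    kl_sharpen = (eta * g_phi).pow(2).sum() / (2 * sigma0 ** 2)
    return nll_fn(f(theta, x), y) + kl_sharpen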
We evaluated our model on the Penn Treebank BID35 benchmark, a task consisting on next word prediction. We used the network architecture from BID50, a simple yet strong baseline on this task, and for which there is an open source implementation 1. The baseline consists of an RNN with LSTM cells and a special regularisation technique, where the dropout operator is only applied to the non-recurrent connections. We keep the network configuration unchanged, but instead of using dropout we apply our Bayes by Backprop formulation. Our goal is to demonstrate the effect of applying BBB to a pre-existing, well studied architecture. To train our models, we tuned the parameters on the prior distribution, the learning rate and its decay. The weights were initialised randomly and we used gradient descent with gradient clipping for optimisation, closely following BID50's "medium" LSTM configuration (2 layers with 650 units each). As in, the prior of the network weights θ was taken to be a scalar mixture of two Gaussian densities with zero mean and variances σ 2 1 and σ 2 2, explicitly DISPLAYFORM0 where θ j is the j-th weight of the network. We searched π ∈ {0.25, 0.5, 0.75}, log σ 1 ∈ {0, −1, −2} and log σ 2 ∈ {−6, −7, −8}.For speed purposes, during training we used one sample from the posterior for estimating the gradients and computing the (approximate) KL-divergence. For prediction, we experimented with either computing the expected loss via Monte Carlo sampling, or using the mean of the posterior distribution as the parameters of the network (MAP estimate). We observed that the improved as we increased the number of samples but they were not significantly better than taking the mean (as was also reported by BID14). For convenience, in TAB2 we report our numbers using the mean of the converged distribution, as there is no computation overhead w.r.t. a standard LSTM model. TAB2 compares our to the LSTM dropout baseline BID50 we built from, and to the Variational LSTMs BID12, which is another Bayesian approach to this task. Finally, we added dynamic evaluation BID37 with a learning rate of 0.1, which was found via cross validation. As with other VAE-related RNNs BID10; BID2 BID7 perplexities using posterior sharpening are reported including a KL penalty KL [q(θ|ϕ, (x, y)) || p(θ|ϕ)] in the log likelihood term (the KL is computed exactly, not sampled). For posterior sharpening we use a hierarchical prior for θ: p(θ|ϕ) = N (θ|ϕ, σ 2 0 I) which expresses our belief that a priori, the network parameters θ will be much like the data independent parameters ϕ with some small Gaussian perturbation. In our experiments we swept over σ 0 on the validation set, and found σ 0 = 0.02 to perform well, although were not particularly sensitive to this. Note that with posterior sharpening, the perplexities reported are upper bounds (as the likelihoods are lower bounds).Lastly, we tested the variance reduction capabilities of posterior sharpening by analysing the perplexity attained by the best models reported in TAB2. Standard BBB yields 258 perplexity after only one epoch, whereas the model with posterior sharpening is better at 227. We also implemented it on MNIST following, and obtained small but consistent speed ups. 
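The scale-mixture prior swept over above can be written as a small helper; the default π and log σ values below are illustrative picks from the stated search ranges, not the selected settings.

import math
import torch

def log_mixture_prior(theta, pi=0.5, log_sigma1=-1.0, log_sigma2=-7.0):
    # log density of a two-component, zero-mean Gaussian scale mixture, summed over all weights
    s1, s2 = math.exp(log_sigma1), math.exp(log_sigma2)
    def log_gauss(x, sigma):
        return -0.5 * (x / sigma) ** 2 - math.log(sigma) - 0.5 * math.log(2 * math.pi)
    comp = torch.stack([math.log(pi) + log_gauss(theta, s1),
                        math.log(1.0 - pi) + log_gauss(theta, s2)])
    return torch.logsumexp(comp, dim=0).sum()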
Lower perplexities on the Penn Treebank task can be achieved by varying the model architecture, which should be complementary to our work of treating weights as random variables-we are simply interested in assessing the impact of our method on an existing architecture, rather than absolute state-of-the-art. See BID23; Zilly et al. FORMULA0; BID36, for a report on recent advances on this benchmark, where they achieve perplexities of 70.9 on the test set. Furthermore we note that the speed of our naïve implementation of Bayesian RNNs was 0.7 times the original speed and 0.4 times the original speed for posterior sharpening. Notably, FIG2 shows the effect of weight pruning: weights were ordered by their signal-to-noise ratio (|µ i |/σ i) and removed (set to zero) in reverse order. We evaluated the validation set perplexity for each proportion of weights dropped. As can be seen, around 80% of the weights can be removed from the network with little impact on validation perplexity. Additional analysis on the existing patterns of the dropped weights can be found in the supplementary material A.3. We used the Penn Treebank test set, which is a long sequence of ≈ 80K words, and reversed it. Thus, the "reversed" test set first few words are: "us with here them see not may we..." which correspond to the last words of the standard test set: "... we may not see them here with us".Let V be the vocabulary of this task. For a given input sequence x = x 1:T and a probabilistic model p, we define the entropy of x under p, H p [x], by DISPLAYFORM0 DISPLAYFORM1, i.e., the per word entropy. Let X be the standard Penn Treebank test set, and X rev the reversed one. For a given probabilistic model p, we define the entropy gap ∆H p by DISPLAYFORM2 Since X rev clearly does not come from the training data distribution (reversed English does not look like proper English), we expect ∆H p to be positive and large. Namely, if we take the per word entropy of a model as a proxy for the models' certainty (low entropy means the model is confident about its prediction), then the overall certainty of well calibrated models over X rev should be lower than over X. Thus, DISPLAYFORM3. When comparing two distributions, we expect the better calibrated one to have a larger ∆H p.In FIG3, we plotted ∆H p for the BBB and the baseline dropout LSTM model. The BBB model has a gap of about 0.67 nats/word when taking 10 samples, and slightly below 0.65 when using the posterior mean. In contrast, the model using MC Dropout BID11 is less well calibrated and is below 0.58 nats/word. However, when "turning off" dropout (i.e., using the mean field approximation), ∆H p improves to below 0.62 nats/word. We note that with the empirical likelihood of the words in the test set with size T (where for each word w ∈ V, p(w) = (# occurrences of w) T ), we get an entropy of 6.33 nats/word. The BBB mean model has entropy of 4.48 nats/word on the reversed set which is still far below the entropy we get by using the empirical likelihood distribution. We also applied Bayes by Backprop for RNNs to image captioning. Our experiments were based upon the model described in, where a state-of-the-art pre-trained convolutional neural network (CNN) was used to map an image to a high dimensional space, and this representation was taken to be the initial state of an LSTM. The LSTM model was trained to predict the next word on a sentence conditioned on the image representation and all the previous words in the image caption. 
We kept the CNN architecture unchanged, and used an LSTM trained using Bayes by Backprop rather than the traditional LSTM with dropout regularisation. As in the case for language modelling, this work conveniently provides an open source implementation 2. We used the same prior distribution on the weights of the network as we did for the language modelling task, and searched over the same hyper-parameters. We used the MSCOCO BID29 ) data set and report perplexity, BLUE-4, and CIDER scores on compared to the Show and Tell model We observe significant improvements in BLUE and CIDER, outperforming the dropout baseline by a large margin. Moreover, a random sample of the captions that were different for both the baseline and BBB is shown in FIG4. Besides the clear quantitative improvement, it is useful to visualise qualitatively the performance of BBB, which indeed generally outperforms the strong baseline, winning in most cases. As in the case of Penn Treebank, we chose a performant, open source model. Captioning models that use spatial attention, combined with losses that optimise CIDER directly (rather than a surrogate loss as we do) achieve over 100 CIDER points BID32 BID31. We have shown how to apply the Bayes by Backprop (BBB) technique to RNNs. We enhanced it further by introducing the idea of posterior sharpening: a hierarchical posterior on the weights of neural networks that allows a network to adapt locally to batches of data by a gradient of the model. We showed improvements over two open source, widely available models in the language modelling and image captioning domains. We demonstrated that not only do BBB RNNs often have superior performance to their corresponding baseline model, but are also better regularised and have superior uncertainty properties in terms of uncertainty on out-of-distribution data. Furthermore, BBB RNNs through their uncertainty estimates show signs of knowing what they know, and when they do not, a critical property for many real world applications such as self-driving cars, healthcare, game playing, and robotics. Everything from our work can be applied on top of other enhancements to RNN/LSTM models (and other non-recurrent architectures), and the empirical evidence combined with improvements such as posterior sharpening makes variational Bayes methods look very promising. We are exploring further research directions and wider adoption of the techniques presented in our work. The core of an RNN, f, is a neural network that maps the RNN state at step t, s t and an input observation x t to a new RNN state s t+1, f: (s t, x t) → s t+1.An LSTM core has a state s t = (c t, h t) where c is an internal core state and h is the exposed state. Intermediate gates modulate the effect of the inputs on the outputs, namely the input gate i t, forget gate f t and output gate o t. The relationship between the inputs, outputs and internal gates of an LSTM cell (without peephole connections) are as follows: As discussed in Section 6.1, for the Penn Treebank task, we have taken the converged model an performed weight pruning on the parameters of the network. Weights were ordered by their signalto-noise ratio (|µ i |/σ i) and removed (set to zero) in reverse order. It was observed that around 80% of the weights can be removed from the network with little impact on validation perplexity. In Figure 5, we show the patterns of the weights dropped for one of the LSTM cells from the model. 
The relations referred to in Sec. A.2 for an LSTM cell without peephole connections take the standard form (σ here denotes the logistic sigmoid and ∘ element-wise multiplication):
i_t = σ(W_i x_t + U_i h_{t−1} + b_i)
f_t = σ(W_f x_t + U_f h_{t−1} + b_f)
o_t = σ(W_o x_t + U_o h_{t−1} + b_o)
c_t = f_t ∘ c_{t−1} + i_t ∘ tanh(W_c x_t + U_c h_{t−1} + b_c)
h_t = o_t ∘ tanh(c_t)
Figure 5: Pruning patterns for one LSTM cell (with 650 units) from the converged model with 80% of total weights dropped. A white dot indicates that the particular parameter was dropped. In the middle column, a horizontal white line means that row was set to zero. Finally, the last column indicates the total number of weights removed for each row.
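Two of the analyses reported above, pruning by signal-to-noise ratio (Section 6.1 and this appendix) and the entropy gap of Section 6.2, can be sketched as follows; model_logprob is an assumed interface standing in for any trained language model.

import torch

def prune_by_snr(mu, sigma, drop_fraction=0.8):
    # order weights by |mu| / sigma and zero out the lowest fraction
    snr = mu.abs() / sigma
    k = max(1, int(drop_fraction * snr.numel()))
    threshold = snr.flatten().kthvalue(k).values
    mask = (snr > threshold).float()
    return mu * mask, mask

def per_word_entropy(model_logprob, words):
    # model_logprob(prefix, word) -> log p(word | prefix); result is in nats per word
    return -sum(model_logprob(words[:t], words[t]) for t in range(len(words))) / len(words)

def entropy_gap(model_logprob, test_words):
    # a well calibrated model should be markedly less certain on the reversed test set
    return per_word_entropy(model_logprob, list(reversed(test_words))) - per_word_entropy(model_logprob, test_words)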
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Hkp3uhxCW
Variational Bayes scheme for Recurrent Neural Networks
Over time, unmanned autonomous vehicles (UAVs), and autonomous flying drones in particular, have attracted a great deal of attention in artificial intelligence. As electronic technology becomes smaller, cheaper and more efficient, the study of UAVs has advanced rapidly. From monitoring floods and discerning the spread of algae in water bodies to detecting forest trails, their applications are far and wide. Our work focuses on autonomous flying drones: we establish a case study on the efficiency, robustness and accuracy of UAVs and support our findings through experiments. We provide details of the software and hardware architecture used in the study. We further discuss our implementation and present experiments that compare three state-of-the-art algorithms, namely TrailNet, InceptionResnet and MobileNet, in terms of accuracy, robustness, power consumption and inference time. In our study, we show that MobileNet produces better results with a much lower computational requirement and power consumption. We also report the challenges we faced during this work, along with a brief discussion of future work to improve safety features and performance. In the modern era, UAVs have become very popular and have the basic intelligence needed to be driven autonomously. Ground vehicles are limited by physical paths and barriers; such is not the case with flying objects like drones, which do not suffer from these physical limitations. Autonomous flying objects are widely discussed these days and are spreading across many domains: traffic monitoring BID9, agriculture BID11, inventory management BID4, surveillance BID15, data mining, disaster response BID10, etc. As their areas of application increase, it becomes more important to find algorithms well suited to these kinds of vehicles. Some applications may not require the drone to be extremely accurate but may require it to work for long durations, e.g. surveillance, while others may require it to be very precise but not to operate for long, e.g. delivery of items. In the last decade, significant progress has been made in autonomous motion planning for vehicles, UAVs in particular. Motion planning for UAVs is distinctly difficult because of several complexities that come with aerial vehicles. The salience of differential constraints, uncertainty in the vehicle state and limited knowledge about the environment make it impossible to have a precise pre-computed plan to follow. These difficulties have given rise to various approaches and techniques for planning the motion of unmanned autonomous vehicles. The variety of available algorithms and their inherent trade-offs offer an intriguing scope for a benchmarking study of accuracy, robustness, power consumption, safety and inference time. Throughout this paper, we bear in mind some of the generic characteristics and prerequisites of UAVs. The basic design of a UAV is modelled to have acceleration and velocity constraints. Furthermore, higher-order differential constraints are associated with the equation of motion of a drone. However, the common objective of all UAVs is to guide the vehicle towards a goal.
In this paper, we introduce to the best of our knowledge, a very first comparative study of three algorithms in order to find a better motion control of a drone for detecting a trail. In order to be able to compare a set of algorithms in a meticulous way, it is necessary to establish their precision and robustness and to evaluate its power consumption as well as inference time. Along with these metrics established for a particular algorithm, it is also necessary to consider the distinct areas of its application. Only then, based on the requirements called for by a particular application, a reasonable opinion about an algorithm is to be formed. Our study covers recent developments and algorithms used in the area of trail detection by UAVs and runs down as comprehensively as possible what has been already upheld regarding these algorithms. We briefly review the recent applications of drones and Unmanned Aerial Vehicles(UAVs) and their challenges. Our work is partly inspired by the research of BID13 on a MAV(Micro Aerial Vehicles) system for autonomous trail. They introduced and trained a deep neural network named Trailnet to estimate the view orientation and lateral offset of the MAV with respect to the trail center. They ran all their vision systems in real time on NVIDIA Jetson TX1. We have presented a detailed study on how we can increase their accuracy and lower their training time. We selected InceptionResnet BID14 and over their proposed.Inception architecture BID14 has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-art performance in the 2015 ILSVRC challenge. Its performance was similar to the latest generation Inception-v3 network. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. In BID7 a class of efficient models called MobileNets was first presented for embedded vision applications. They are based on a streamlined architecture which uses depth-wise separable convolutions to build light weight deep neural networks. Two simple global hyper-parameters introduced efficiently trade off between latency and accuracy. MobileNets are built primarily from depth-wise separable convolutions initially introduced in Sif and subsequently used in Inception models BID8 ) to reduce the computation in the first few layers. In this paper, we have tried to put forward a comparative study to find an algorithm well suited for UAVs. For this task, we selected 3 algorithms from state-of-the-art namely Trailnet, InceptionResnet and Mobilenet. We have chosen Inception-ResNet and MobileNet for our experiments as they were proven to perform very well on classification tasks. Our aim is to train these algorithms offline, implement them on the drone and use their output commands to correctly navigate the drone through trails. We have used Udacity simulator to simulate the path generated by the output of these algorithms and to compare their robustness during long trails. An important goal of our method is to demonstrate the effectiveness of low cost systems for the complex task of flying an autonomous drone. In this section, we want to put forward the architecture (both hardware and software) that we have used for our research in order to achieve our . 
We will further discuss about the basis of choosing Jetson TX2 for computing and how it has shaped our upshot. Also, we will talk about using ROS together with Ubuntu L4T and the hardware configuration of our drone. ).We have also used a Windows 10, Intel(R) Xeon(R) Silver 4114 CPU @2.20GHz(20 CPUs), 2.2GHz, 32GB RAM with Nvidia Titan Xp GPU for training purposes. 3. ZED Camera: We have used ZED stereo camera, which is a standard 3D USB camera for object detection and image capturing. It adds depth perception, positional tracking and 3D mapping using advanced sensing technology based on human stereo vision. 4. Battery: We use Venom 35C, 14.8V, 5000 mAh lithium polymer battery which is best suitable for our drone 5. GPS: We use Holybro Ublox Neo-M8N GPS module which provides high sensitivity and minimal acquisition time while maintaining low system power. 1. Jetpack -Jetpack is an SDK released by NVIDIA and is the most comprehensive solution for building artificially intelligent applications in recent times jet. Our Jetson Developer Kit (TX2) on the drone was first flashed with the latest OS image -Ubuntu 16.04 (Xenial). We further used JetPack installer to install developer tools for both Host and the Drone (the developer kit on the drone). All the libraries, TensorRT, cuDNN, CUDA, OpenCV to name some were installed to jumpstart our development environment. For our case study, we used JetPack-L4T 3.2. 2. ROS -ROS or Robot Operating System is a collection of Linux based frameworks acting as robotics middleware. We have used ROS Kinetic Kame kin for hardware abstraction of Joystick and the Controller present on the drone. ROS also plays an important part in controlling devices at a very low-level as well as for transferring data. A ROS system typically consists of a number of independent nodes. For our environment, these nodes are MAVROS, Control Node, DNN, Camera and Joystick -each of which is able to communicate with each other using subscribe or publish messaging model. All nodes are registered with the master node (MAVROS for our case) which in turn helps them to find and communicate with each other. The MAVROS enables MAVLink (Micro Air Vehicle Link) mav protocol to communicate with the PX4 (FCU) on-board.3. Communication -The on-board Jetson TX2 had wifi access point enabled by us before installing it on the drone. As a , the host PC could connect to the Jetson TX2 wirelessly and was able to access it remotely. By sending commands through the terminal, the host was able to control the drone by Secure Socket Shell (SSH) network protocol. 4. Udacity Simulator -Learning about UAVs comes with a lot of risks considering the constant fright of crashing. From testing new hardwares, learning to control the flight controller unit of the drone to further fine tuning of our algorithms, the ing failures can be really expensive. To overcome this we have used a simulator offered by Udacity in its Flying Car nanodegree program. FIG2 demonstrates the state transition diagram of the drone using this simulator. We use standard UART(Universal Asynchronous Receiver/Transmitter) communications for controls. The transmitting UART converts parallel data from a controlling device like CPU into serial form, transmits it in serial form to receiving UART, which then converts the serial data back to parallel for the receiving device. FIG4 shows the workflow involving all the modules and communication between them. 
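A minimal rospy-style sketch of the publish/subscribe pattern just described, in which a DNN node consumes camera frames and publishes steering commands that the control node forwards to the FCU through MAVROS. The topic names, message types and the run_dnn helper are illustrative assumptions, not the exact interfaces used on our drone.

#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Image
from geometry_msgs.msg import Twist

def on_camera_frame(msg):
    # run the trained network on the frame and map its left/straight/right decision to a yaw command
    decision = run_dnn(msg)   # hypothetical helper wrapping the trained model
    cmd = Twist()
    cmd.linear.x = 0.5        # constant forward speed
    cmd.angular.z = {"left": 0.3, "straight": 0.0, "right": -0.3}[decision]
    cmd_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("dnn_trail_follower")
    cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rospy.Subscriber("/zed/rgb/image_rect_color", Image, on_camera_frame)
    rospy.spin()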
For our experiments, we use IDSIA trail dataset BID6 which were collected on Swiss Alps forest trails. It consists of approximately 15GB of image data acquired using different cameras. The entire dataset is divided into 15 parts, in the form of folders numbered from 000 till 014. We have used folders 001, 002, 004, 005, 006, 007 and 009 for training our model and used folders 003, 008 and 010 for validation. We ran our tests on folder 012. We had to discard folders 000, 013 and 014 as they comprise of only preliminary test data. In this section we first go through the model selections that we made by considering their performances on ImageNet Challenge BID12. ImageNet project in an ongoing large visual database designed for use in visual object recognition software research. Since 2010, ImageNet has been running an annual competition in visual recognition where participants are provided with 1.2 Million images belonging to 1000 different classes. Several deep learning architectures have been proposed since then and we have considered two of them for our experiments. We have chosen the models based on their performance on Imagenet Challenge BID12 considering accuracy and computation as major metrics. Accuracy is one of the critical aspect in building deep learning models. It depends on network architecture and the amount of data available for training. Also most of the convolutional neural networks (ConvNets) have huge memory and computational requirements especially during training. This is an important concern specifically for our purpose as memory and space footprint is limited in embedded AI devices such as drones. Also size of the final trained model becomes important to consider as we will be deploying the models to run locally on drone. In general, more computationally intensive networks usually tends to produce more accuracy. Hence, there is always a trade-off between accuracy and computational cost. Apart from these, there are many other factors which are important in selecting models such as training time, ability of a network to generalize well, inference time etc. Considering all of these factors, we have chosen MobileNet BID7 Transfer learning is a Machine Learning technique to store knowledge gained on solving one problem and applying it to a different related problem. The three major transfer learning scenarios are using ConvNet as fixed feature extractor, fine-tuning the ConvNet and using pre-trained models. The two most important factors that help us to decide on what type of transfer learning we should perform on new dataset are size of the dataset and similarity to original dataset. Since our dataset is large enough and different from dataset, we have decided to fine-tune our models. Fine-tuning is a transfer learning strategy to replace the final layers of the ConvNets and tweak them so that they can learn more robust features relevant to our problem. We then retrain the classifier on the new dataset and fine-tune the weights of the pre-trained network by continuing the back propagation. Usually the earlier features of a ConvNet contain more generic features (like edge detectors, color detectors etc.) which are useful for many image detecting tasks. Later layers of ConvNets extract minute detailed features specific to the classes contained in our problem. We have fine-tuned both InceptionResnet and MobileNet by initializing them with their weights pretrained on ImageNet BID5 and retrain the model from scratch. 
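A sketch of the fine-tuning recipe just described, using torchvision's MobileNetV2 as a stand-in and an IDSIA-style folder layout with the three trail classes; paths, transforms and hyperparameters are illustrative assumptions. (Horizontal flips, which we also use for augmentation, are omitted here because they require swapping the left and right labels.)

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomRotation(10),
    transforms.ColorJitter(contrast=0.2),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("idsia_trails/train", transform=train_tf)  # folders: left/ straight/ right
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.mobilenet_v2(pretrained=True)             # initialise with ImageNet weights
model.classifier[1] = nn.Linear(model.last_channel, 3)   # replace the head for the 3 trail classes

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()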
In this section we discuss the of our experiments with comparisons in terms of size of the models, accuracy, inference time and power consumption. Table 2 compares the architectural complexity, training time and accuracy of TrailNet, InceptionResnet and MobileNet on our dataset. It is evident from our observations that both InceptionResnet and MobileNet have an accuracy better than TrailNet. Also the training time and computational cost involved for MobileNet is much less than that of other two models because of its less complexity, even though its accuracy is on par with InceptionResnet model. We have performed testing of our models by running them on Udacity simulator. We have used a dataset consisting of 2000 images pertaining to a trail path of approximately 100 meters and simulated the drone by the commands given by output of the algorithms. FIG5 shows the path traversed by the drone autonomously on the test data environment, where a) is the path traversed by drone manually controlled using simulator, b) is the path traversed when the simulator is controlled with output of Inception-Resnet and c) is the path traversed by drone when controlled with output of MobileNet. It can be seen that the drone is more robust and pretty much followed the ground truth path using both the models. We have measured the inference time (i.e., time taken by the model to predict output of test dataset) on Jetson TX2 machine and the following table shows the comparison. From our observations in TAB2, MobileNet has a very less inference time, which means the cameras with high frame rate can be operated using the MobileNet model for more time and with less energy consumption compared to other two models. We have calculated our power draw values on Nvidia Titan Xp GPU with a maximum possible power draw of 300W. Also the average power drawn with idle GPU is 15W. We can see that MobileNet draws very less power compared to other two models which in long battery life of the drone. We have taken a batch size of 32 for inference power draw calculation. Table 2, MobileNet has a very less inference time, which means the cameras with high frame rate can be operated using the MobileNet model for more time and with less energy consumption compared to other two models. Note that we have not trained TrailNet model from scratch and so we have not reported the peak power consumed in this case. While producing this work, we have encountered several challenges. Few of these challenges are listed below:1. The major challenge encountered was to run our DNN models on the physical drone in real time due to a hardware bug we were facing with the FCU.2. We could not train our models from scratch due to lack of significant amount of dataset. Additionally, we handled a lot of issues to make our models more stable and robust. Since in each trial, the number of images in each class (left, straight and right) were different, there was a lot of data imbalance which we solved by upsampling and downsampling the dataset.3. Due to the large number of training parameters at the beginning, our models were overfitted. We eliminated over-fitting by introducing several data augmentation techniques (random flipping, random rotation, random contrast and transition etc.). We further included regularization (especially dropout layers) in order to reduce network complexity.4. Power is one of the important factors especially in mobile embedded devices with small size and computational power. 
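A simple way to measure the inference times reported above is sketched below. It assumes a Keras-style model.predict interface, uses the batch size of 32 mentioned for the power-draw measurements, and discards a few warm-up batches so that one-off initialisation costs are not counted.

```python
# Sketch of per-batch inference-time measurement; the model.predict interface is an assumption.
import time

def mean_inference_time(model, images, batch_size=32, warmup=3):
    # A few warm-up batches so one-off graph/GPU initialisation is not counted.
    for _ in range(warmup):
        model.predict(images[:batch_size])
    start = time.time()
    n_batches = 0
    for i in range(0, len(images), batch_size):
        model.predict(images[i:i + batch_size])
        n_batches += 1
    return (time.time() - start) / max(n_batches, 1)
```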
Typically, deep learning algorithms consume more power specifically for the real time inference. We have made an estimate of the power consumption of each of our model by calculating the GPU power drawn by them but we could not test how long our drone would run implementing each of these models due to the hardware bug mentioned before. In this paper, we have presented a comparison between 3 algorithms -TrailNet, InceptionResnet and MobileNet in terms of accuracy, computational cost, power consumption, inference time and robustness. The choice of algorithm for UAVs varies on the basis of several factors. In our work, we have worked with some of the factors which we thought would be pivotal in algorithm selection considering reasonable comparisons. We observed in our study that MobileNet outperformed others with very less computational requirement and power consumption. Hence in our opinion, MobileNet is more befitting for drones and other embedded devices compared to TrailNet and InceptionResnet. Safety is another major concern in terms of drones. There can be many situations such as collision with the objects, external disturbances like winds, chances of drone moving out of manual controller zone, battery issues, chances of getting stolen and other safety hazards. We will be implementing these drone related safety features in our future work.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
Syx9rnRcYm
case study on optimal deep learning model for UAVs
Music relies heavily on repetition to build structure and meaning. Self-reference occurs on multiple timescales, from motifs to phrases to reusing of entire sections of music, such as in pieces with ABA structure. The Transformer , a sequence model based on self-attention, has achieved compelling in many generation tasks that require maintaining long-range coherence. This suggests that self-attention might also be well-suited to modeling music. In musical composition and performance, however, relative timing is critically important. Existing approaches for representing relative positional information in the Transformer modulate attention based on pairwise distance . This is impractical for long sequences such as musical compositions since their memory complexity is quadratic in the sequence length. We propose an algorithm that reduces the intermediate memory requirements to linear in the sequence length. This enables us to demonstrate that a Transformer with our modified relative attention mechanism can generate minute-long (thousands of steps) compositions with compelling structure, generate continuations that coherently elaborate on a given motif, and in a seq2seq setup generate accompaniments conditioned on melodies. We evaluate the Transformer with our relative attention mechanism on two datasets, JSB Chorales and Piano-e-competition, and obtain state-of-the-art on the latter. A musical piece often consists of recurring elements at various levels, from motifs to phrases to sections such as verse-chorus. To generate a coherent piece, a model needs to reference elements that came before, sometimes in the distant past, and then repeat, vary, and further develop them to create contrast and surprise. Intuitively, self-attention could be a good match for this task. Self-attention over its own previous outputs allows an autoregressive model to access any part of the previously generated output at every step of generation. By contrast, recurrent neural networks have to learn to proactively store elements to be referenced in a fixed size state or memory, making training potentially much more difficult. We believe that repeating self-attention in multiple, successive layers of a Transformer decoder BID17 can help capture the multiple levels at which self-referential phenomena exist in music. In its original formulation, the Transformer relies on absolute position representations, using either positional sinusoids or learned position embeddings that are added to the per-position input representations. Recurrent and convolutional neural networks instead model position in relative terms: RNNs through their recurrence over the positions in their input, and CNNs by applying kernels that effectively choose which parameters to apply based on the relative position of the covered input representations. Music has multiple dimensions along which relative differences arguably matter more than their absolute values; the two most prominent are timing and pitch. To capture such pairwise relations between representations, BID13 introduce a relation-aware version of self-attention which they use successfully to modulate self-attention by the distance between two positions. We extend this approach to capture relative timing and optionally also pitch, which yields improvement in both sample quality and perplexity for the JSB Chorales dataset. As opposed to the original Transformer, samples from a Transformer with our relative attention mechanism maintain the regular timing grid present in this dataset. 
The model furthermore captures global timing, giving rise to regular phrases. The original formulation of relative attention BID13 requires O(L 2 D) memory where L is the sequence length and D is the dimension of the model's hidden state. This is prohibitive for long sequences such as those found in the Maestro dataset of human-performed virtuosic, classical piano music BID7. In Section 3.4, we show how to reduce the memory requirements to O(LD), making it practical to apply relative attention to long sequences. The Maestro dataset consists of MIDI recorded from performances of competition participants, bearing expressive dynamics and timing on a less than 10-millisecond granularity. Discretizing time in a fixed grid on such a resolution would yield unnecessarily long sequences as not all events change on the same timescale. We hence adopt a sparse, MIDI-like, event-based representation from , allowing a minute of music with a 10-millisecond resolution to be represented at lengths around 2K. This is in contrast to a 6K to 18K length that would be needed on a serialized multi-attribute fixed-grid representation. As position in sequence no longer corresponds to time, a priori it is not obvious that relative attention should work as well with such a representation. However, we will show in Section 4.2 that it does improve perplexity and sample quality over strong baselines. We speculate that idiomatic piano gestures such as scales, arpeggios and other motifs all exhibit a certain grammar and recur periodically, hence knowing their relative positional distances makes it easier to model this regularity. This inductive bias towards learning relational information, as opposed to patterns based on absolute position, suggests that the Transformer with relative attention could generalize beyond the lengths it was trained on, which our experiments in Section 4.2.1 confirm. We show the first successful use of Transformers in generating music that exhibits long-term structure. Before our work, LSTMs were used at timescales of 15s (~500 tokens) of piano performances . Our work demonstrates that Transformers not only achieve state-of-the-art perplexity on modeling these complex expressive piano performances, but can also generate them at the scale of minutes (thousands of tokens) with remarkable internal consistency. Our relative self-attention formulation is essential to the model's quality. In listening tests (see Section 4.2.3), samples from models with relative self-attention were perceived as more coherent than the baseline Transformer model BID17. Relative attention not only enables Transformers to generate continuations that elaborate on a given motif, but also to generalize and generate in consistent fashion beyond the length it was trained on (see Section 4.2.1). In a seq2seq setup, Transformers can generate accompaniments conditioned on melodies, enabling users to interact with the model. The space complexity of the relative self-attention mechanism in its original formulation BID13 made it infeasible to train on sequences of sufficient length to capture long-range structure in longer musical compositions. To address this, we present a crucial algorithmic improvement to the relative self-attention mechanism, dramatically reducing its memory requirements from O(L 2 D) to O(LD). 
For example, the memory consumption per layer is reduced from 8.5 GB to 4.2 MB (per head from 1.1 GB to 0.52 MB) for a sequence of length L = 2048 and hidden-state size D = 512 (per head D h = D H = 64, where number of heads is H = 8) (see TAB1), allowing us to use GPUs to train the relative self-attention Transformer on long sequences. Sequence models have been the canonical choice for modeling music, from Hidden Markov Models to RNNs and Long Short Term Memory networks (e.g., BID3 ;), to bidirectional LSTMs (e.g., BID6 . Successful application of sequential models to polyphonic music often requires serializing the musical score or performance into a single sequence, for example by interleaving different instruments or voices. Alternatively, a 2D pianoroll-like representation (see A.1 for more details) can be decomposed into a sequence of multi-hot pitch vectors, and their joint probability distributions can be captured using Restricted Boltzmann Machines BID14 BID8 or Neural Autoregressive Distribution Estimators (NADE; BID10 . Pianorolls are also image-like and can be modeled by CNNs trained either as generative adversarial networks (GAN; BID4 . (e.g., BID2 or as orderless NADEs BID15 BID19) (e.g., BID9 . use self-similarity in style-transfer fashion, where the self-similarity structure of a piece serves as a template objective for gradient descent to impose similar repetition structure on an input score. Self-attention can be seen as a generalization of self-similarity; the former maps the input through different projections to queries and keys, and the latter uses the same projection for both. Dot-product self-attention is the mechanism at the core of the Transformer, and several recent works have focused on applying and improving it for image generation, speech, and summarization BID12). A key challenge encountered by each of these efforts is scaling attention computationally to long sequences. This is because the time and space complexity of self-attention grows quadratically in the sequence length. For relative self-attention BID13 this is particularly problematic as the space complexity also grows linearly in the dimension, or depth, of the per-position representations. 3.1 DATA REPRESENTATION We take a language-modeling approach to training generative models for symbolic music. Hence we represent music as a sequence of discrete tokens, with the vocabulary determined by the dataset. Datasets in different genres call for different ways of serializing polyphonic music into a single stream and also discretizing time. The JSB Chorale dataset consists of four-part scored choral music, which can be represented as a matrix where rows correspond to voices and columns to time discretized to sixteenth notes. The matrix's entries are integers that denote which pitch is being played. This matrix can than be serialized in raster-scan fashion by first going down the rows and then moving right through the columns (see A.1 for more details). Compared to JSB Chorale, the piano performance data in the Maestro dataset includes expressive timing information at much finer granularity and more voices. For the Maestro dataset we therefore use the performance encoding proposed by which consists of a vocabulary of 128 NOTE_ON events, 128 NOTE_OFFs, 100 TIME_SHIFTs allowing for expressive timing at 10ms and 32 VELOCITY bins for expressive dynamics (see A.2 for more details). 
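A minimal sketch of this event vocabulary is given below. The event counts (128 NOTE_ON, 128 NOTE_OFF, 100 TIME_SHIFT, 32 VELOCITY) follow the description above, while the particular index layout is an assumption made for illustration.

```python
# Sketch of the MIDI-like performance-event vocabulary; the index layout is an assumption.
NUM_PITCHES, NUM_TIME_SHIFTS, NUM_VELOCITY_BINS = 128, 100, 32

def note_on(pitch):            # tokens 0..127
    return pitch

def note_off(pitch):           # tokens 128..255
    return NUM_PITCHES + pitch

def time_shift(ms):            # tokens 256..355, forward shifts of 10ms..1000ms in 10ms steps
    return 2 * NUM_PITCHES + (ms // 10 - 1)

def set_velocity(velocity):    # tokens 356..387, 128 MIDI velocities quantized into 32 bins
    return 2 * NUM_PITCHES + NUM_TIME_SHIFTS + velocity * NUM_VELOCITY_BINS // 128

VOCAB_SIZE = 2 * NUM_PITCHES + NUM_TIME_SHIFTS + NUM_VELOCITY_BINS  # 388 tokens
```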
The Transformer decoder is a autoregressive generative model that uses primarily self-attention mechanisms, and learned or sinusoidal position information. Each layer consists of a self-attention sub-layer followed by a feedforward sub-layer. DISPLAYFORM0 DISPLAYFORM1 The attention outputs for each head are concatenated and linearly transformed to get Z, a L by D dimensional matrix. A upper triangular mask ensures that queries cannot attend to keys later in the sequence. For other details of the Transfomer model, such as residual connections and learning rates, the reader can refer BID17. The feedforward (FF) sub-layer then takes the output Z from the previous attention sub-layer, and performs two layers of point-wise dense layers on the depth D dimension, as shown in Equation 2. W 1, W 2, b 1, b 2 are weights and biases of those two layers. DISPLAYFORM2 3.3 RELATIVE POSITIONAL SELF-ATTENTION As the Transformer model relies solely on positional sinusoids to represent timing information, BID13 introduced relative position representations to allow attention to be informed by how far two positions are apart in a sequence. This involves learning a separate relative position embedding E r of shape (H, L, D h), which has an embedding for each possible pairwise distance r = j k − i q between a query and key in position i q and j k respectively. The embeddings are ordered from distance −L + 1 to 0, and are learned separately for each head. In BID13, the relative embeddings interact with queries and give rise to a S rel, an L × L dimensional logits matrix which modulates the attention probabilities for each head as: DISPLAYFORM3 We dropped head indices for clarity. Our work uses the same approach to infuse relative distance information in the attention computation, while significantly improving upon the memory footprint for computing S rel. For each head, BID13 DISPLAYFORM4, containing the embeddings that correspond to the relative distances between all keys and queries. Q is then reshaped to an (L, 1, D h) tensor, and S rel = QR. 2 This incurs a total space complexity of O(L 2 D), restricting its application to long sequences. We improve the implementation of relative attention by reducing its intermediate memory requirement DISPLAYFORM0, with example lengths shown in TAB1. We observe that all of the terms we need from QR are already available if we directly multiply Q with E r, the relative position embedding. After we compute QE r, its (i q, r) entry contains the dot product of the query in position i q with the embedding of relative distance r. However, each relative logit (i q, j k) in the matrix S rel from Equation 3 should be the dot product of the query in position i q and the embedding of the relative distance j k − i q, to match up with the indexing in QK. We therefore need to "skew" QE r so as to move the relative logits to their correct positions, hence S rel = Skew(QE r). The "skewing" procedure is illustrated in FIG0 and will be detailed in the next section. The time complexity for both methods are O(L 2 D), while in practice our method is 6x faster at length 650 as prior work still requires manipulating larger tensors. Hence, we propose a "skewing" procedure to transform an absolute-by-relative (i q, r) indexed matrix into an absolute-by-absolute (i q, j k) indexed matrix. 
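For concreteness, the following NumPy sketch shows single-head causal attention with the relative logits S_rel added to QK^T as in Equation 3, using the prior O(L^2 D) construction that materialises the (L, L, D_h) tensor R. It is a shape-level illustration, not the Tensor2Tensor implementation.

```python
# Shape-level sketch of single-head attention with relative logits (Eq. 3), using the
# prior O(L^2 D) construction of S_rel that gathers the (L, L, D_h) tensor R.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relative_attention_naive(Q, K, V, E_r):
    # Q, K, V: (L, D_h); E_r: (L, D_h), one embedding per relative distance -L+1..0.
    L, D_h = Q.shape
    # R[i, j] is the embedding of relative distance j - i (clipped; entries with j > i are masked anyway).
    idx = np.clip(np.arange(L)[None, :] - np.arange(L)[:, None], -L + 1, 0) + L - 1
    R = E_r[idx]                                          # (L, L, D_h) -- the memory bottleneck
    S_rel = np.einsum("id,ijd->ij", Q, R)                 # S_rel = QR
    logits = (Q @ K.T + S_rel) / np.sqrt(D_h)
    mask = np.triu(np.ones((L, L), dtype=bool), k=1)      # queries cannot attend to later keys
    logits = np.where(mask, -1e9, logits)
    return softmax(logits) @ V
```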
The row indices i q stay the same while the columns indices are shifted according to the following equation: FIG0 the upper right green dot in position of QE r after skewing has a column index of 2 − (3 − 1) + 0 = 0, ing in a position of in S rel. DISPLAYFORM1 We outline the steps illustrated in FIG0 below.1. Pad a dummy column vector of length L before the leftmost column.2. Reshape the matrix to have shape (L+1, L). (This step assumes NumPy-style row-major ordering.)3. Slice that matrix to retain only the last l rows and all the columns, ing in a (L, L) matrix again, but now absolute-by-absolute indexed, which is the S rel that we need. For very long sequences, the quadratic memory requirement of even baseline Transformer is impractical. Local attention has been used for example in Wikipedia and image generation (; by chunking the input sequence into non-overlapping blocks. Each block then attends to itself and the one before, as shown by the smaller thumbnail on the top right corner of FIG2 .To extend relative attention to the local case, we first note that the right block has the same configuration as in the global case (see FIG0) but much smaller: DISPLAYFORM0 where M is the number of blocks, and N be the ing block length) as opposed to L 2. The left block is unmasked with relative indices running from -1 (top right) to -2N + 1 (bottom left). Hence, the learned E r for the local case has shape (2N − 1, N).Similar to the global case, we first compute QE r and then use the following procedure to skew it to have the same indexing as QK, as illustrated in FIG2 1. Pad a dummy column vector of length N after the rightmost column.2. Flatten the matrix and then pad with a dummy row of length N − 1. 4.1 J.S. BACH CHORALES J.S. Bach Chorales is a canonical dataset used for evaluating generative models for music 3 (e.g., BID0 BID1 ; BID5 BID9 . It consists of score-based four-part chorales. We first discretize the scores onto a 16th-note grid, and then serialize them by iterating through all the voices within a time step and then advancing time (see A.1 for more details). As there is a direct correspondence between position in sequence and position on the timing/instrument grid in a piece, adding relative position representations could make it easier to learn this grammar. We indeed see relative attention drastically improve negative log-likelihood (NLL) over baseline Transformer TAB3 ). This improvement is also reflected in sample quality. The samples now maintain the necessary timing/instrument grid, always advancing four steps before advancing in time. As local timing is maintained, the model is able to capture timing on a more global level, giving rise to regular phrasing, as shown in the right score of FIG1. In addition to relative attention, we explored enhancing absolute timing through concatenating instead of adding the sinusoids to the input embeddings. This allows the model to more directly learn its absolute positional mapping. This further improves performance for both the baseline and relative transformer TAB3. We compare against COCONET as it is one of the best-performing models that has also been evaluated on the 16-note grid using the canonical dataset split. To directly compare, we re-evaluated COCONET to obtain note-wise losses on the validation set 4. For the Transformer models (abbreviated as TF), we implemented our attention mechanisms in the Tensor2Tensor framework. We use 8 heads, and keep the query, key (att) and value hidden size (hs) fixed within a config. 
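The global skewing procedure itself can be written in a few lines. The NumPy sketch below follows the three steps above (pad a dummy column, reshape row-major to (L+1, L), keep the last L rows) and never materialises the (L, L, D_h) tensor R, since only E_r of shape (L, D_h) is needed per head.

```python
# Sketch of the memory-efficient relative logits: S_rel = Skew(Q @ E_r^T),
# following the three-step global "skewing" procedure (row-major reshape).
import numpy as np

def skew(qe_r):
    # qe_r: (L, L), entry (i_q, r) = query i_q dotted with the embedding of relative distance r - L + 1.
    L = qe_r.shape[0]
    padded = np.pad(qe_r, ((0, 0), (1, 0)))   # 1. pad a dummy column before the leftmost column
    reshaped = padded.reshape(L + 1, L)       # 2. reshape (L, L+1) -> (L+1, L)
    return reshaped[1:]                       # 3. keep the last L rows: now absolute-by-absolute indexed

def relative_logits(Q, E_r):
    # Q: (L, D_h) queries; E_r: (L, D_h) embeddings for relative distances -L+1..0.
    return skew(Q @ E_r.T)                    # avoids gathering the (L, L, D_h) tensor R
```

On a small example this reproduces the gather-based S_rel exactly on the masked (lower-triangular) entries, which are the only ones the causal attention uses.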
We tuned number of layers (L in {4,5,6}), attention hidden size (att in {256, 512}) and pointwise feedforward hidden size (ff in {512, 1024}). A musical event bears multiple attributes, such as timing, pitch, instrument etc. To capture more relational information, we extend relative attention to capture pairwise distances on additional attributes. We learn separate relative embeddings for timing E t and also pitch E p. E t has entries corresponding to how many sixteenth notes apart are two positions in time, while E p embeds the pairwise pitch interval. However this approach is not directly scalable beyond J.S. Bach Chorales because it involves explicitly gathering relative embeddings for R t and R p, ing in a memory complexity of O(L 2 D) as in BID13. This is due to relative information being computed based on content as opposed to content-invariant information such as position in sequence. It was sufficient to add these extra relational information to the first layer, perhaps because it is closest to the raw input content. Here, the relative logits are computed from three terms, We train on random crops of 2048-token sequences and employ two kinds of data augmentation: pitch transpositions uniformly sampled from {−3, −2, . . ., 2, 3} half-steps, and time stretches uniformly sampled from the set {0.95, 0.975, 1.0, 1.025, 1.05}. For evaluation, we segment each sequence sequentially into 2048 length subsequences and also keep the last subsequences that are of shorter lengths. This in 1128 and 1183 subsequences in the validation and test set respectively. Each subsequence is then evaluated by running the model forward once with teaching forcing. As the subsequences vary in length, the overall negative loglikelihood (NLL) is averaged entirely on the token level. We compare our to PerformanceRNN (LSTM, which first used this dataset) and LookBack RNN (LSTM with attention) BID19. LookBack RNN uses an input representation that requires monophonic music with barlines which is information that is not present 5 Maestro dataset: urlhttps://magenta.tensorflow.org/datasets/maestro 6 Piano-e-Competition: http://www.piano-e-competition.com/ 7 COCONET is an instance of OrderlessNADE, which approximates a mixture model over orderings where orderings are assumed to be uniformly distributed. Hence, its loss is computed by averaging losses over multiple random orderings. The current row in the table reports this loss. In contrast, the row above corresponds to evaluating Coconet as an autoregressive model under the chronological ordering. BID9 show that sample quality is better when using Gibbs sampling (which uses conditionals from multiple orderings) as opposed to autoregressive generation (which only uses conditionals from one ordering). DISPLAYFORM0 in performed polyphonic music data, hence we simply adopt their architecture. TAB4 shows that Transformer-based architectures fits this dataset better than LSTM-based models. We implemented our attention mechanisms in the Tensor2Tensor framework, and used the default hyperparameters for training, with 0.1 learning rate and early stopping. We compare four architectures, varying on two axes: global versus local, and regular versus relative attention. We found that reducing the query and key channel size (att) to three forth of the hidden size (hs) works well for this dataset and used this setup for all of the models, while tuning on number of layers (L) and dropout rate (d). We use block size (bs) 512 for local attention (; . 
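Returning to the two data augmentations described above, the sketch below shows one way they could be applied directly to a performance-event sequence: pitch transpositions shift NOTE_ON/NOTE_OFF tokens and time stretches rescale TIME_SHIFT tokens. The token index layout follows the assumed scheme from the earlier vocabulary sketch and is illustrative only.

```python
# Sketch of pitch-transposition and time-stretch augmentation on an event sequence;
# the token layout (NOTE_ON 0-127, NOTE_OFF 128-255, TIME_SHIFT 256-355) is an assumption.
import random

PITCH_SHIFTS = [-3, -2, -1, 0, 1, 2, 3]          # half-steps
TIME_STRETCHES = [0.95, 0.975, 1.0, 1.025, 1.05]

def augment(events):
    shift, stretch = random.choice(PITCH_SHIFTS), random.choice(TIME_STRETCHES)
    out = []
    for tok in events:
        if tok < 128:                              # NOTE_ON: transpose pitch
            out.append(min(max(tok + shift, 0), 127))
        elif tok < 256:                            # NOTE_OFF: transpose pitch
            out.append(min(max(tok - 128 + shift, 0), 127) + 128)
        elif tok < 356:                            # TIME_SHIFT: stretch the 10ms..1s duration
            ms = (tok - 256 + 1) * 10
            out.append(256 + min(max(round(ms * stretch / 10), 1), 100) - 1)
        else:                                      # SET_VELOCITY: unchanged
            out.append(tok)
    return out
```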
For relative global attention, the maximum relative distance to consider is set to half the training sequence length. For relative local attention, it is set to the full memory length which is two blocks long. TAB4 shows that our memory-efficient relative attention formulations outperform regular attention in both the global and the local case. When looking at the other axes, we see global attention outperforming local attention in both the relative and regular case. Global attention may have the advantage of being able to directly look back for repeating motifs. With a larger dataset, local attention may fare well as it allows for much deeper models and longer sequences, as seen in text and image generation work . In turn, both domains could benefit from the use of relative local attention. When primed with an initial motif (Chopin's Étude Op. 10, No. 5) shown in the top left corner of FIG4, we see the models perform qualitatively differently. Transformer with relative attention elaborates the motif and creates phrases with clear contour which are repeated and varied. Baseline Transformer uses the motif in a more uniform fashion, while LSTM uses the motif initially but soon drifts off to other material. Note that the generated samples are twice as long as the training sequences. Relative attention was able to generalize to lengths longer than trained but baseline Transformer deteriorates beyond its training length. See Appendix C for visualizations of how our Relative Transformer attends to past motifs. To explore the sequence-to-sequence setup of Transformers, we experimented with a conditioned generation task where the encoder takes in a given melody and the decoder has to realize the entire performance, i.e. melody plus accompaniment. The melody is encoded as a sequence of tokens as in BID19, quantized to a 100ms grid, while the decoder uses the performance encoding described in Section 3.1 (and further illustrated in A.2). We use relative attention on the decoder side and show in TAB5 that it also improves performance. To compare the perceived sample quality of models trained on the Maestro dataset, and their ability to generate a continuation for a priming sequence, we carried out a listening test study comparing the baseline Transformer, our Transformer with relative-attention, PerformanceRNN (LSTM), and the validation set. Participants were presented with two musical excerpts (from two different models that were given the same priming sequence) and asked to rate which one is more musical on a Likert scale. For each model, we generated 10 samples each with a different prime, and compared them to three other models, ing in 60 pairwise comparisons. Each pair was rated by 3 different participants, yielding a total of 180 comparisons. Figure 5 shows the number of comparisons in which an excerpt from each model was selected as more musical. The improvement in sample quality from using relative attention over the baseline Transformer model was statistically significant (see Appendix B for the analysis), both in aggregate and between the pair. Even though in aggregate LSTMs performed better in the study than the Transformer, despite having higher perplexity, but when compared against each other head to head, the were not statistically significant (see TAB6 in Appendix B).(Ours) Figure 5: Samples from the Transformer with our efficient relative attention were rated more musical more times than LSTM and baseline Transformer. Error bars show standard deviation of the mean. 
In this work we demonstrated that the Transformer equipped with relative attention is very well-suited for generative modeling of symbolic music. The compelling long-term structure in the samples from our model leaves us enthusiastic about this direction of research. Moreover, the ability to expand upon a prime, in particular, suggests potential applications as creative tool. The significant improvement from relative attention highlights a shortcoming of the original Transformer that might also limit its performance in other domains. Improving the Transformer's ability to capture periodicity at various time scales, for instance, or relations between scalar features akin to pitch could improve time-series models. Our memory-efficient implementation enables the application of relative attention to much longer sequences such as long texts or even audio waveforms, which significantly broadens the range of problems to which it could be applied. Adapting sequence models for music requires making decisions on how to serialize a polyphonic texture. The data type, whether score or performance, makes certain representations more natural for encoding all the information needed while still ing in reasonable sequence lengths. A.1 SERIALIZED INSTRUMENT/TIME GRID (USED FOR THE J.S.BACH CHORALES DATASET) The first dataset, J.S. Bach Chorales, consists of four-part score-based choral music. The time resolution is sixteenth notes, making it possible to use a serialized grid-like representation. Figure 6 shows how a pianoroll (left) can be represented as a grid (right), following BID9. The rows show the MIDI pitch number of each of the four voices, from top to bottom being soprano (S), alto (A), tenor (T) and bass (B), while the columns is discretized time, advancing in sixteenth notes. Here longer notes such as quarter notes are broken down into multiple repetitions. To serialize the grid into a sequence, we interleave the parts by first iterating through all the voices at time step 1, and then move to the next column, and then iterate again from top to bottom, and so on. The ing sequence is S 1 A 1 T 1 B 1 S 2 A 2 T 2 B 2..., where the subscript gives the time step. After serialization, the most common sequence length is 1024. Each token is represented as onehot in pitch. S: 67, 67, 67, 67 A: 62, 62, 62, 62 T: 59, 59, 57, 57 B: 43, 43, 45, 45 Figure 6: The opening measure of BWV 428 is visualized as a pianoroll (left, where the x-axis is discretized time and y-axis is MIDI pitch number), and encoded in grid representation with sixteenth note resolution (right). The soprano and alto voices have quarter notes at pitches G4 and D4, the tenor has eighth notes at pitches B3 and A3, and the bass has eighth notes at pitches A2 and G2. The second dataset, Maestro BID7, consists of polyphonic piano performances with expressive timing and dynamics. The time resolution here is on the millisecond level, so a grid representation would in sequences that are too long. Instead, the polyphonic performance is serialized into a sequence of one hot encoded events as proposed in.First, the input MIDI files are preprocessed to extend note durations based on sustain pedal control events. The sustain pedal is considered to be down whenever a sustain control change is encountered with a value >= 64; the sustain pedal is then considered up after a control change with a value < 64. 
Within a period where the sustain pedal is down, the duration of each note is extended to either the beginning of the next note of the same pitch or the end of the sustain period, whichever happens first. If the original duration extends beyond the time when the sustain pedal is down, that original duration is used. Next, the MIDI note events are converted into a sequence from the following set of vocabulary: 128 NOTE_ON events for starting a note of with one of the 128 MIDI pitches, 128 NOTE_OFF events for ending a note with one of the 128 MIDI pitches, 100 TIME_SHIFT events representing forward time shifts in 10ms increments from 10ms to 1s, and 32 SET_VELOCITY events representing the velocity for future NOTE_ON events in the form of the 128 possible MIDI velocities quantized into 32 bins. An example performance encoding is illustrated in Figure 7. TAB1 TIME_SHIFT<500>, NOTE_OFF<65> Figure 7: A snippet of a piano performance visualized as a pianoroll (left) and encoded as performance events (right, serialized from left to right and then down the rows). A C Major chord is arpeggiated with the sustain pedal active. At the 2-second mark, the pedal is released, ending all of the notes. At the 3-second mark, an F is played for.5 seconds. The C chord is played at velocity 80 and the F is played at velocity 100. Participants were presented with two musical excerpts that shared a common priming sequence. For each excerpt, the priming sequence was played, followed by 2.5 seconds of silence, followed by the priming sequence again and a continuation of that sequence. The continuations were either sampled from one of the models or extracted from our validation set. We evaluated all possible pairs in the space of data and model samples, except from the same model. Each continuation had a length of 512 events using the encoding described in Section A.2. This corresponds to the length the models were trained on to remove the deteriorating effect that happens with baseline Transformer when asked to generate beyond the length it was trained on. Participants were asked which excerpt they thought was more musical on a Likert scale of 1 to 5. The pair is laid out left versus right, with 1 indicating the left is much more musical, 2 the left is slightly more musical, 3 being a tie, 4 being the right is slightly more musical, and 5 the right is much more musical. For each model, we generated 10 samples each with a different prime, and compared them to three other models, ing in 60 pairwise comparisons. Each pair was rated by 3 different participants, yielding a total of 180 comparisons. A Kruskal-Wallis H test of the ratings showed that there was a statistically significant difference between the models: χ 2 = 63.84, p = 8.86e-14< 0.01. TAB6 show a post-hoc analysis on the comparisons within each pair, using the Wilcoxon signed-rank test for matched samples. TAB7 shows a post-hoc analysis of how well each model performed when compared to all pairs, and compares each model's aggregate against each other, using the Mann-Whitney U test for independent samples. We use a Bonferroni correction on both to correct for multiple comparisons. 
The win and loss counts bucket scores 4, 5 and scores 1, 2 respectively, while the tieing score is 3.Both within pairs and between aggregates, participants rated samples from our relative Transformer as more musical than the baseline Transformer with p < 0.01/6.For within pairs, we did not observe a consistent statistically significant difference between the other model pairs, baseline transformer versus LSTM and LSTM versus relative Transformer. When comparing between aggregates, LSTM was overall perceived as more musical than baseline Transformer. Relative Transformer came a bit close to outperforming LSTM with p = 0.018. When we listen to the samples from the two, they do sound qualitatively different. Relative Transformer often exhibits much more structure (as shown in FIG4), but the effects were probably less pronounced in the listening test because we used samples around 10s to 15s, which is half the length of those shown in FIG4 to prevent the baseline Transformer from deteriorating. This weakens the comparison on long-term structure. When compared to real music from the validation set, we see that in aggregates, real music was better than LSTM and baseline Transformer. There was no statistical significant difference between real music and relative Transformer. This is probably again due to the samples being too short as real music is definitely still better. One advantage of attention-based models is that we can visualize its attention distribution 3. This gives us a glimpse of how the model might be building up recurring structures and how far it is attending back. The pianorolls in the visualizations below is a sample generated from Transformer with relative attention. Each figure shows a query (the source of all the attention lines) and previous memories being attended to (the notes that are receiving more softmax probabiliy is highlighted in). The coloring of the attention lines correspond to different heads and the width to the weight of the softmax probability. Figure 8: This piece has a recurring triangular contour. The query is at one of the latter peaks and it attends to all of the previous high notes on the peak, all the way to beginning of the piece. Steps 2,3:Figure 10: Relative global attention: Steps (from left to right) for "skewing" an absolute-by-relative (i q, r) indexed matrix into absolute-by-absolute (i q, j k). Grey indicates self-attention masks or entries introduced by the skewing procedure. Positions with relative distance zero are marked. Entries outlined by purple are removed in step 3. Steps 1, 2Steps 3 Steps 4Figure 11: Relative local attention: Steps (from left to right) for "skewing" an (i q, r) indexed matrix with 2N − 1 ranged relative indices r into (i q, j k indexed. Shapes are indicated above the boxes, while indices in the boxes give relative distances.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJe4ShAcF7
We show the first successful use of Transformer in generating music that exhibits long-term structure.
Sequential decision problems for real-world applications often need to be solved in real-time, requiring algorithms to perform well with a restricted computational budget. Width-based lookaheads have shown state-of-the-art performance in classical planning problems as well as over the Atari games with tight budgets. In this work we investigate width-based lookaheads over Stochastic Shortest paths (SSP). We analyse why width-based algorithms perform poorly over SSP problems, and overcome these pitfalls proposing a method to estimate costs-to-go. We formalize width-based lookaheads as an instance of the rollout algorithm, give a definition of width for SSP problems and explain its sample complexity. Our experimental over a variety of SSP benchmarks show the algorithm to outperform other state-of-the-art rollout algorithms such as UCT and RTDP. Model-based lookahead algorithms provide the ability to autonomously solve a large variety of sequential decision making problems. Lookaheads search for solutions by considering sequences of actions that can be made from the current state up to a certain time into the future. For realworld applications decisions often need to be computed in real-time, requiring algorithms to perform with a restricted computational budget. Limiting search in this way can in considering states and trajectories which do not provide useful information. To address this, lookaheads can be augmented with heuristics that estimate costs-to-go to prioritise states and trajectories, and have been shown to perform well where computation budgets are restricted BID8.This paper is concerned with Stochastic Shortest Path (SSP) problems which are often used to compare and evaluate search algorithms. We consider the width-based family of planning algorithms, first introduced by BID15, which aim to prioritise the exploration of novel areas of the state space. Two width-based planners, Lipovetzky and Geffner's breadth-first search, IW, and the depth-first search, Rollout-IW BID1, are investigated on SSP problems. We first provide the necessary for SSP problems and width-based algorithms, while also formalising width-based algorithms as instances of the rollout algorithm BID4. We then show the motive to augment width-based lookaheads with cost estimates on SSP problems, define the width of SSP problems and propose a novel width-based algorithm that estimates costs-to-go by simulating a general base policy. Our experimental study shows that the algorithm compares favourably to the original Rollout-IW algorithm and to other state-of-the-art instances of the rollout algorithm. We concern ourselves with the problem of decision under stochastic uncertainty over a finite number of stages, which we characterise following closely the presentation of BID4. We are given a discrete-time dynamic system x k+1 = f k (x k, u k, w k), k = 0, 1,..., N − 1 where the state x k is an element of a space S k ⊂ R d, the control u k is an element of space C k ⊂ N, and the random disturbance w k is an element of a space D k ⊂ R m 1. The control u k is constrained to take values in a given non-empty subset U (x k) ⊂ C k, which depends on the current state x k, so that u k ∈ U k (x k) for all x k ∈ S k and k. The random disturbance w k is characterised by a probability distribution P k (·|x k, u k) that may depend explicitly on x k and u k but not on the values of previous disturbances w k−1,..., w 0. 
We consider the class of policies, or control laws, corresponding to the sequence of functions DISPLAYFORM0 where µ k maps states x k into controls u k = µ k (x k) and is such that µ k (x k) ∈ U (x k) for all x k ∈ S k. Such policies will be called admissible. Given an initial state x 0 and admissible policy π, the states x k and disturbances w k are random variables with distributions defined through the system equation DISPLAYFORM1 Thus, for given functions g f (terminal cost) and g the expected cost of π starting at x 0 is DISPLAYFORM2 where the expectation is taken over the random variables w k and x k. An optimal policy π * is one that minimises this cost DISPLAYFORM3 where Π is the set of all admissible policies. The optimal cost J * (x 0) depends on x 0 and is equal to J π * (x 0). We will refer to J * as the optimal cost or optimal value function that assigns to each initial state x 0 the cost J * (x 0). We use BID4 definition, that formulates Stochastic Shortest Path (SSP) problems as the class of optimal control problems where we try to minimize DISPLAYFORM0 with α set to 1 and we assume there is a cost-free termination state t which ensures that J π (x 0) is finite. Once the system reaches that state, it remains there at no further cost, that is, f (t, u, w) = t with probability 1 and g(t, u, w) = 0 for all u ∈ U (t). We note that the optimal control problem defined at the beginning of this section is a special case where states are pairs (x k, k) and all pairs (x N, N) are lumped into termination state t. In order to guarantee termination with probability 1, we will assume that there exists an integer m such that there is a positive probability that t will be reached in m stages or less, regardless of what π is being used and the initial state x 0. That is, for all admissible policies and i = 1,..., m it holds DISPLAYFORM1 A policy π will be proper if the condition above is satisfied for some m, and improper otherwise. A particularly effective on-line approach to obtain suboptimal controls is rollout, where the optimal cost-to-go from current state x k is approximated by the cost of some suboptimal policy and a d-step lookahead strategy. The seminal RTDP BID2 algorithm, is an instance of the rollout strategy where the lookahead is uniform, d = 1, and controlsμ(x k) selected at stage k and for state x k are those that attain the minimum DISPLAYFORM0 whereJ k+1 is an approximation on the optimal cost-to-go J * k+1. If the approximation is from below, we will refer to it as a base heuristic, and can either be problem specific BID8, domain independent BID5 BID22 or learnt from interacting with a simulator BID18. Alternatively, J k+1 can be defined as approximating the cost-to-go of a given suboptimal policy π, referred to as a base policy, where estimates are obtained via simulation BID19. We will denote the ing estimate of cost-to-go as H k (x k) 2. The of combining the lookahead strategy and the base policy or heuristic is the rollout policy,π {μ 0, µ 1, . . .,μ N −1} with associated costJ(x k). Such policies have the property that for all x k and k DISPLAYFORM1 when H k is approximating from above the cost-to-go of a policy, as shown by BID4 from the DP algorithm that defines the costs of both the base and the rollout policy. 
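A minimal sketch of the one-step (d = 1) rollout control of Equation 7 is given below: the expectation over the disturbance w is estimated by repeatedly sampling a simulator, and the control with the smallest estimated Q-factor is returned. The simulator and cost-to-go interfaces are placeholders.

```python
# Sketch of the one-step lookahead rollout control of Eq. 7; simulator and
# cost-to-go approximation J_tilde are assumed interfaces.
def rollout_control(x, admissible, simulate, J_tilde, num_samples=16):
    """Return argmin_u of the sampled estimate of E[ g(x, u, w) + J_tilde(f(x, u, w)) ]."""
    best_u, best_q = None, float("inf")
    for u in admissible(x):
        q = 0.0
        for _ in range(num_samples):
            x_next, cost = simulate(x, u)      # one draw of w: returns f(x, u, w) and g(x, u, w)
            q += cost + J_tilde(x_next)
        q /= num_samples
        if q < best_q:
            best_u, best_q = u, q
    return best_u
```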
To compute at time k the rollout controlμ(x k), we compute and minimize over the values of the Q-factors of state and control pairs (x l, u l), DISPLAYFORM2 for admissible controls u l ∈ U (x l), l = k + i, with i = 0,..., d − 1, and DISPLAYFORM3 for l = k +d. In this paper we make a number of assumptions to ensure the viability of lookaheads with d > 1. We will assume that we can simulate the system in Eq. 3 under the base policy, so we can generate sample system trajectories and corresponding costs consistent with probabilistic data of the problem. We further assume that we can reset the simulator of the system to an arbitrary state. Performing the simulation and calculating the rollout control still needs to be possible within the real-time constraints of the application, which is challenging as the number of Q-factors to estimate and minimizations to perform in Equations 9-10 is exponential on the average number of controls available per stage and d, the maximum depth of the lookahead. We avoid the blowup of the size of the lookahead by cutting the recursion in Equation 9 and replacing the right hand side by that of Equation 10. As detailed in the next section, we will do this when reaching states x l that are deemed not to be novel according to the notion of structural width by BID15. This in a selective strategy alternative to the upper confidence bounds BID0 used in popular instances of Monte-Carlo Tree Search (MCTS) algorithms like UCT, that also are instances of the rollout algorithm BID4 ). We instantiate the rollout algorithm with an l-step, depth-selective lookahead policy using Width-based Search BID15. These algorithms both focus the lookahead and have good any-time behaviour. When it comes to prioritisation of expanding states, width-based methods select first states with novel valuations of features defined over the states BID16 BID9 DISPLAYFORM0 is the domain of variable x i, has not been seen before in the current search. Note that novel states are independent of the objective function used, as the estimated cost-to-go J is not used to define the novelty of the states. IW has recently been integrated as an instance of a rollout algorithm, and has been shown to perform well with respect to learning approaches with almost real-time computation budgets over the Atari games BID1. The breadth-first search strategy underlying IW ensures a state variable x i is seen for the first time through the shortest sequence of control steps, i.e. the shortest path assuming uniform costs g(x, u, w).4 On the other hand, depth-first rollout algorithms cannot guarantee this property in general. Rollout IW (RIW) changes the underlying search of IW into a depth-first rollout. In order to ensure that RIW considers a state to be novel iff it reaches at least one value of a state variable x i l through a shortest path, we need to adapt the definition of novelty. Intuitively, we need to define a set of state features to emulate the property of the breadth-first search strategy. Let d(x i, v) be the best upper bound known so far on the shortest path to reach each value v ∈ D(x i) of a state variable from the root state x k. Initially d(x i, v) = N for all state variables, where N is the horizon which is the maximum search depth allowed for the lookahead, thus denoting no knowledge initially. When a state x l is generated, d(x i, v) is set to l for all state variables where l < d(x i, v). 
Since RIW always starts each new rollout from the current state x k, in order to prove a state x l to be novel we have to distinguish between x l being already in the lookahead tree and x l being new. If x l is new in the tree, to conclude it is novel, it is sufficient to show that there exists a state variable x i whose known shortest path value d(x i, v) > l. If x l is already in the tree, we have to prove the state contains at least one state variable value x i whose shortest path is l = d(x i, v), i.e. state x l is still novel and on the shortest path to x i. Otherwise the rollout is terminated. In order to ensure the termination of RIW, non-novel states are marked with a solved label. The label is backpropagated from a state x l+1 to x l if all the admissible control inputs u ∈ U (x l) yield states x l+1 = f l (x l, u, w l) already labeled as solved. RIW terminates once the root state is labeled as solved BID1. Nonnovel states x l are treated as terminals and their cost-to-go is set to 0. This can induce a bias towards non-novel states rather than true terminal states. In the next section we investigate how to overcome the ill-behaviour of RIW when a state x l is non-novel. We discuss the importance of estimating upper-bounds on the cost-to-go H l (x l) instead of assigning termination costs. This turns out to be essential for RIW over SSPs. Despite the successes of width-based algorithms on a variety of domains including the Atari-2600 games BID16 BID1, the algorithms, as will be shown, have poor performance on SSP problems. We illustrate this with two scenarios. First, width-based lookaheads prefer trajectories leading to non-novel states over longer ones that reach a goal. Second, and specific to depth-first width-based lookaheads, we show that useful information is ignored. We can demonstrate these scenarios using a simple SSP problem with uniform and unitary action costs, shown in FIG0. The task is to navigate to a goal location using the least number of left, right, up or down actions. Any action that would in the agent moving outside of the grid produces no change in its position. The features used by the width-based planners are the coordinates for the current agent position. Both IW and RIW algorithms, given a sufficient budget, would in the lookahead represented by yellow lines in FIG0. As expected, both lookaheads contain the shortest paths to make each feature of the problem true. For both IW and RIW, we back up the costs found in the lookahead starting from terminal and non-novel states. In this instance a move down or left from the agent's initial state has no effect, thus immediately producing a non-novel state. When backing up values, down and left have an expected cost of 1, which is less than the optimal cost of 2 for up, the action that leads to the top left goal state. This prevents both IW and RIW from ever achieving the goal, as they keep selecting those useless actions. Furthermore, if the goal is the top right location in FIG0, RIW's random action selection can generate a trajectory that reaches the goal. Yet, trajectories leading to the goal are pruned away, as non-novel states in later considered trajectories are treated as terminals, again ing in the lookahead represented by the yellow lines in FIG0. BID1 introduced the algorithm RIW in the context of deterministic transition functions. In this section we discuss its properties in the context of SSPs. 
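The novelty test just described can be sketched as follows, with states summarised as sets of (variable index, value) features and d(x_i, v) kept in a table initialised to the horizon N. How features are extracted from a state is problem specific and left abstract here.

```python
# Sketch of the RIW novelty test over features (i, v) with the shortest-path table d.
def is_novel(features, depth, d_table, horizon, already_in_tree):
    """features: iterable of (variable_index, value) pairs for a state generated at depth `depth`."""
    if already_in_tree:
        # Still on a known shortest path for at least one feature: l == d(x_i, v).
        return any(depth == d_table.get(f, horizon) for f in features)
    # New state: strictly improves the known shortest path to some feature value, l < d(x_i, v).
    return any(depth < d_table.get(f, horizon) for f in features)

def update_depths(features, depth, d_table, horizon):
    # When a state is generated, d(x_i, v) is lowered to `depth` wherever it improves the bound.
    for f in features:
        if depth < d_table.get(f, horizon):
            d_table[f] = depth
```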
The set of features used to evaluate the novelty of a state is DISPLAYFORM0 is the domain of variable x i, and d is a possible shortest path distance. Note that the horizon N is the upper-bound of d. The maximum number of novel states is O(|F |), as the maximum number of shortest paths for a feature (v, i, ·) ∈ F is N. That is, in the worst case we can improve the shortest path for (v, i, ·) by one control input at a time. The labeling of nodes ensures the number of rollouts from the initial state in RIW is at most O(|F | × b), where b = max x l |U (x l)| is the maximum number of applicable control variables in a state, i.e. maximum branching factor. When the labeling is applied to stochastic shortest path problems, the ing lookahead tree is a relaxation of the original SSP, as it allows just one possible outcome of a control input. Alternatively, one can back-propagate the label solved to a state x l iff 1) all admissible control inputs u ∈ U (x l) have been applied ing in states labeled as solved, and 2) the tree contains all the possible ing states of each control input u ∈ U (x l). We refer to this new strategy to backpropagate labels as λ-labeling. We denote as λ the maximum number of states that can from applying u ∈ U (x l−1) in a state x l. That is, λ = max x,u,w |f (x, u, w)|. RIW with λ-labeling will terminate after at most DISPLAYFORM1 Furthermore, we can reconcile the notion of width over classical planning problems BID15 with SSPs. A terminal state t made of features f ∈ F has width 1 iff there is a trajectory x 0, u 0,..., u n−1, x n for n ≤ N where x n = t, such that for each x j in the trajectory 1) the prefix x 0, u 0,..., u j−1, x j reaches at least one feature DISPLAYFORM2, it is a shortest path possible to reach a value in x i, 2) any shortest path to f j can be extended with a single control input u into a shortest path for a feature f j+1 complying with property 1) in state x j+1, and 3) the shortest path for f n is also a shortest path for termination state t. RIW with the new labeling strategy is guaranteed to reach all width 1 terminal states t. Theorem 1. Rollout IW with λ-labeling is guaranteed to reach every width 1 terminal state t in polynomial time in the number of features F if λ = 1. If λ = ∞, RIW will not propagate any solved label, and terminate only when the computational budget is exhausted. For simplicity, we assumed shortest paths are equivalent to the shortest sequence of control inputs. To generalize to positive non-uniform costs, the distance d in the features should keep track of the cost of a path i g(x i, u i, w i) instead of its length, and the horizon be applied to the cost of the path. The most successful methods for obtaining cost-to-go approximations have revolved around the idea of running a number of Monte Carlo simulations of a suboptimal base policy π BID10 BID7. This amounts to generating a given number of samples for the expression minimized in Equation 7 starting from the states x l over the set of admissible controls u l ∈ U (x l) in Equation 10, averaging the observed costs. Three main drawbacks BID4 follow from this strategy. First, the costs associated with the generated trajectories may be wildly overestimating J * (x l) yet such trajectories can be very rare events for the given policy. Second, some of the controls u l may be clearly dominated by the rest, not warranting the same level of sampling effort. Third, initially promising controls may turn out to be quite bad later on. 
MCTS algorithms aim at combining lookaheads with stochastic simulations of policies and aim at trading off computational economy with a small risk of degrading performance. We add two new methods to the MCTS family, by combining the multi-step, width-based lookahead strategy discussed in the previous section with two simulation-based cost-to-go approximations subject to a computational budget that limits the number of states visited by both the lookahead and base policy simulation. The first method, which we call RIW-RW, uses as the base policy a random walk, a stochastic policy that assigns the same probability to each of the controls u admissible for state x, and is generally regarded as the default choice when no domain knowledge is readily available. A rolling horizon H is set for the rollout algorithm that follows from combining the RIW lookahead with the simulation of random walks. The maximal length of the latter is set to H − l, where l is the depth of the non-novel state. Both simulations and the unrolling of the lookahead are interrupted if the computational budget is exhausted. While this can in trajectories sometimes falling short from a terminal, it keeps a lid on the possibility of obtaining extremely long trajectories that eat into the computational budget allowed and preclude from further extending the lookahead or sampling other potentially more useful leaf states x l. One of the most striking properties of rollout algorithms is the cost improvement property in Equation 8, suggesting that upper bounds on costs-to-go can be used effectively to approximate the optimal costs J *. Inspired by this, the second width-based MCTS method we discuss leverages the sampling techniques known as stochastic enumeration (SE) BID19 to obtain an unbiased estimator for upper bounds on costs-to-go, or in other words, estimates the maximal costs a stochastic rollout algorithm with a large depth lookahead can incur. SE methods are inspired by a classic algorithm by D. E. Knuth to estimate the maximum search effort by backtracking search. Knuth's algorithm estimates the total cost of a tree T with root u keeping track of two quantities, C the estimate of the total cost, and D the expectation on the number of nodes in T at any given level of the tree, and the number of terminal nodes once the algorithm terminates. Starting with the root vertex u and D ← 1, the algorithm proceeds by updating D to be D ← |S(u)|D and choosing randomly a vertex v from the set of successors S(u) of u. The estimate C is then updated C ← C + c(u, v)D using the cost of the edge between vertices u and v. These steps are then iterated until a vertex v is selected s.t. S(v) = 0. We observe that Knuth's C quantity would correspond to the worst-case cost-to-goJ(x) k of a rollout algorithm using a lookahead strategy with d set to the rolling horizon H and the trivial base heuristic that assigns 0 to every leaf state. Furthermore, we assume that the algorithm either does not find any terminals within the limits imposed by the computational budget assigned, or if it finds one such state, it is too the very last one being visited. Lookaheads define trees over states connected by controls, edge costs c(u, v) correspond directly with realisations of the random variable g(x, u, w) and the set of successors S(v) of a vertex corresponds with the set of admissible controls U (x). 
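A single probe of Knuth's estimator, as described above, is straightforward to implement; the sketch below omits the computational-budget and horizon cut-offs used in our rollouts, and the function names are hypothetical.

```python
import random

def knuth_cost_estimate(root, successors, edge_cost):
    # C accumulates the cost estimate, D the expected number of nodes at the
    # current level; a random child is followed until a leaf is reached.
    u, C, D = root, 0.0, 1.0
    while True:
        succ = successors(u)          # admissible controls / child vertices
        if not succ:
            return C                  # leaf: the estimate is complete
        D *= len(succ)
        v = random.choice(succ)
        C += edge_cost(u, v) * D
        u = v
```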
While Knuth's algorithm estimates are an unbiased estimator, the variance of this estimator can be exponential on the horizon, as the accuracy of the estimator lies on the assumption that costs are evenly distributed throughout the tree BID19. In the experiments discussed next, we use Knuth's algorithm directly to provide H k (x k), adding the stopping conditions to enforce the computational budget and limiting the length of trajectories to H − l as above. In comparison with simulating the random walk policy, the only overhead incurred is keeping up-to-date quantities C and D with two multiplications and an addition. To evaluate the different methods we use a number of GridWorld (Sutton and Barto 2018) domains, an instance of a SSP problem. The goal in GridWorld is to move from an initial position in a grid to a goal position. In each state 4 actions are available: to move up, down, left or right. Any action that causes a move outside of the grid in no change to the agent's position. Actions have a cost of 1, with the exception of actions that in reaching the goal state, that have a cost of 0. The complexity of GridWorld can be scaled through the size of the grid and the location and number of goals. GridWorld also allows for extensions, which we use to have domains with a stationary goal, moving goals, obstacles and partial observability. For each instance of the GridWorld domain we have a d 0 × d 1 grid, and the state is the current location of the agent, x = (a 0, a 1) where a i is the agent's position in dimension i. The transition function is formalised as DISPLAYFORM0 where, ef specifies the change in the agent's position for each action, T k ⊂ S k is the set of goal states and x k+1 = x k where the condition in FIG0 is not met. The cost of a transition is defined as DISPLAYFORM1 For the stationary goal setting we have a single goal state which is positioned in the middle of the grid by dividing 223.2 ± 10.4 199.9 ± 13.6 145.5 ± 12.9 DISPLAYFORM2 where δ t k gives the relative change of the goal state t k for the time step k + 1 and DISPLAYFORM3 Resulting in two goals starting at opposite corners of the grid moving back and forth on the same diagonal. The obstacles setting, uses the stationary goal, but modifies S k such that, DISPLAYFORM4 where O ⊂ N 2 and is the set of obstacles, that is grid cells in which the agent is not allowed. Having partially observable obstacles in GridWorld provides an instance of the stochastic Canadian Traveller Problem (CTP) (Papadimitriou and Yannakakis 1991). The objective in CTP is to find the shortest path between one location in a road map to another, however, there is a known probability for each road in the map that due to weather conditions the road is blocked. A road in CTP can only be observed as being blocked or unblocked by visiting a location connected to it, and once a road status is observed the status remains unchanged. In terms of the GridWorld problem, each grid cell has a known probability of being a member of the obstacle set, O. The agent can only observe cells as being obstacles or not when it is in a neighbouring cell. Once a grid cell is observed it is then known that it is either an obstacle or not for the remaining duration of the episode. 
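The GridWorld dynamics and costs described above can be sketched as follows; the orientation of the moves and the function name are assumptions of this sketch.

```python
def gridworld_step(state, action, dims, goals):
    # Moves that would leave the grid keep the agent in place; every
    # transition costs 1 except those reaching a goal cell, which cost 0.
    moves = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
    dx, dy = moves[action]
    nx, ny = state[0] + dx, state[1] + dy
    next_state = (nx, ny) if 0 <= nx < dims[0] and 0 <= ny < dims[1] else state
    cost = 0.0 if next_state in goals else 1.0
    return next_state, cost
```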
John Langford designed two MDP problems 5 described as Reinforcement Learning (RL) "Acid", intended to be difficult to solve using common RL algorithms such as Q-learning.
Table 2: Same experimental setting as TAB2 over GridWorld with a moving goal.
Langford's two problems allow two actions from every state. The state space for the problems is S k = {i | 0 ≤ i < N}, where the number of states, N, allows the complexity of the problem to be controlled. Langford originally specified the problems as reward-based; here we modify them to be SSP cost-based problems. Reward shaping is commonly used to make Reinforcement Learning easier by encouraging actions, through higher rewards, towards a goal state or states. The first of Langford's problems is named Antishaping and uses reward shaping to encourage actions away from the goal state. Antishaping has the transition function DISPLAYFORM5 otherwise, the state remains unchanged, x k+1 = x k. The set containing the goal state is T k = {N − 1}, which can be reached by continuously selecting u k = 0. The cost of each transition in Antishaping is 0.25/(N − x k+1), except when x k+1 = N − 1, where the cost is 0. The problem becomes a large plateau where longer paths become more costly at larger rates. The motivation behind Langford's second problem, Combolock, is that if many actions lead back to a start state, random exploration is inadequate. The Combolock problem has the transition function DISPLAYFORM6 otherwise x k+1 is equal to the initial position of 0. The goal state is T k = {N − 1}, and sol x k is either 0 or 1, assigned to state x k, and remains constant. For each state x ∈ S, sol x has an equal chance of being either 0 or 1. The cost of each transition in Combolock is 1, except for the transition that leads to the terminal state N − 1, where the cost is 0. While common Reinforcement Learning algorithms such as Q-Learning will struggle to solve these domains, it is claimed by Langford that the E 3 (Kearns and Singh 2002) family of algorithms, whose exploration does not rely solely on random policies or reward feedback but on exploring the maximum number of states, will perform well.
Table 3: Same settings as TAB2 over GridWorld with fully observable obstacles and a stationary goal.
We evaluate the depth-first width-based rollout algorithm, RIW, with and without being augmented with base policies. λ = 1 is used for the labels back-propagation. We did not observe statistically significant changes with λ = ∞.
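Since the transition equations of the two Langford problems are not reproduced above, the following sketch fills them in as assumptions that are consistent with the surrounding description (action 0 advances towards the goal in Antishaping; only the state's secret digit advances in Combolock).

```python
import random

def antishaping_step(x, u, N):
    # Assumed transition: u == 0 moves one step towards the goal N - 1.
    x_next = min(x + 1, N - 1) if u == 0 else x
    cost = 0.0 if x_next == N - 1 else 0.25 / (N - x_next)
    return x_next, cost

def make_combolock(N, seed=0):
    rng = random.Random(seed)
    sol = [rng.randint(0, 1) for _ in range(N)]   # fixed per instance
    def step(x, u):
        # Assumed transition: the correct digit advances, anything else
        # sends the agent back to the initial state 0.
        x_next = min(x + 1, N - 1) if u == sol[x] else 0
        cost = 0.0 if x_next == N - 1 else 1.0
        return x_next, cost
    return step
```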
For the GridWorld domain we define the features that RIW plans over as F = {(a, i, d) | a ∈ D(x i)}, where d is the length of the control input path from the initial state, a is the agent's position in the grid in dimension i, and D(x i) is the domain of the agent's position in dimension i. For Antishaping and Combolock the feature set is F = {(i, d) | i ∈ {0, . . ., N − 1}}, where i is the state number the agent is in and N is the number of states of the domain. Two additional rollout algorithms are also considered: the one-step rollout algorithm RTDP BID2, and the multi-step, selective, regret-minimisation rollout algorithm Upper Confidence bounds applied to Trees (UCT) BID14. The exploration parameter of UCT is set to 1.0 for all experiments. For all the methods that use a base policy, the maximum depth of a simulated trajectory is equal to H − l, where l is the depth at which the simulated trajectory began and H is the horizon value of the lookahead. Also, a single, as opposed to multiple, simulated trajectory is used for the cost-to-go approximation, as initial results indicated this is favourable. We also report results for the algorithms using a Manhattan distance heuristic on the GridWorld domains that use obstacles. Using the Manhattan distance for the GridWorld problems with obstacles provides a lower bound on the cost-to-go. Each method is evaluated on the domains at different levels of complexity by varying the number of states. The methods are evaluated using different simulator budgets. The simulator budgets are the maximum number of simulator calls allowed for the evaluation at each time step.
Results are also presented for the Atari-2600 game Skiing, which is an SSP problem. We use the OpenAI gym's BID6 interface of the Arcade Learning Environment (ALE) BID3 and use the slalom game mode of Skiing. In the slalom mode the aim is to ski down the course as fast as possible while going through all the gates. Once the finish line is reached, a 5 second time penalty is applied for each gate that is missed. The reward values provided by ALE for Skiing are the negative value of the total time taken plus any time penalties in centiseconds, which we use as a cost. We use the environment settings as described by BID17 with a frame skip of 5 and a probability of 0.25 of repeating the previous action sent to the environment instead of the current one, which Machado et al. call sticky actions. For evaluation we use a simulator budget of 100 and partial caching as described by BID1, in that we cache simulator state-action transitions, thus assuming determinism, but clear the cached transitions when executing an action in the environment. However, the lookahead tree itself is not cleared when executing an action in the environment, as is done for the other domains trialed. The maximum episode length is capped at 18,000 frames; with a frame skip of 5 this equals 3,600 actions.
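For concreteness, the feature sets defined at the start of this section can be computed as below (hypothetical function names):

```python
def gridworld_features(state, depth):
    # One (value, dimension, depth) triple per coordinate of the agent.
    return [(a, i, depth) for i, a in enumerate(state)]

def chain_features(state, depth):
    # Antishaping/Combolock: the state index paired with the rollout depth.
    return [(state, depth)]
```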
Using a simulation-based cost-to-go approximation is infeasible with a simulator budget of 100 and a maximum episode length of 3,600 actions. Therefore we report results for the algorithms using a heuristic cost-to-go estimate, which is the number of gates that have either been missed or are still left, times the time penalty of 500 centiseconds. For the RIW algorithms we use the pixel values from the current gray-scaled screen at full resolution, that is 180 by 210 pixels, as features. All experiments were run within the OpenAI gym framework BID6, and the code used for the algorithms and domains is available through GitHub 6. The different H functions reported here are H NA = 0, the random policy H Rnd, and the Manhattan distance H Man. The algorithms were also evaluated using Knuth's algorithm with a different range of rollouts for the cost-to-go estimate; however, those results are not reported here as they are either statistically indifferent or dominated by the results using H Rnd with a single rollout. BID4 suggests that MCTS algorithms should readily benefit from stronger algorithms to estimate costs-to-go by simulation of stochastic policies. Our experiments showed that if synergies exist, they do not manifest when using off-the-shelf stochastic estimation techniques like the ones discussed by BID19. TAB2, Table 2 and Table 3 report the results of the different lookahead algorithms on the GridWorld domain variants with a stationary goal, moving goals and obstacles respectively. For these three domains, results were also collected for a 100x100 grid; however, they were omitted from the tables as the simulator budgets used were not sufficient to find anything meaningful. The results on the stationary goal GridWorld domain shown in TAB2 provide a number of insights about the rollout algorithms reported. First, we can see RIW benefits from using H Rnd rather than H NA when the larger simulator budgets are used. As the simulator budget increases, as could be expected, so does the performance of all the methods using H Rnd. On the contrary, with H NA RIW's performance remains constant across the different budgets. The explanation for this can be found in the motivating example we gave previously in this paper, with the agent preferring the shorter trajectory of driving into the boundary of the grid. TAB2 also shows that given the largest budget and H Rnd, RIW statistically outperforms the other algorithms on the three domains of different size. Table 2 for GridWorld with moving goals has similar patterns to the stationary goal domain, in that RIW with H Rnd dominates performance for the largest budget. Also, the majority of performances for the smaller budgets, excluding RIW.
Table 6: Same settings as TAB6 over Combolock.
GridWorld with a stationary goal and obstacles, displayed in Table 3, continues the trend of results. Using the largest budget, RIW with H Rnd outperforms all methods on the 10x10 and 20x20 domains. For the 50x50 domain a number of results are statistically indifferent. For this domain the algorithms using H Man as the base heuristic are also reported. While using the largest budget on the 10x10 grid H Rnd dominates H Man, for the larger 50x50 grid we see H Man dominates H Rnd for UCT, and is competitive for the other methods. For the smallest simulator budget on CTP, reported in Table 4, H Man with RIW and UCT are the dominant methods. For the largest simulator budget, RIW using H Rnd is dominant over all other methods for both domain sizes.
We also see that in most cases, for the two smaller budgets, H Man dominates the H Rnd methods. TAB6 and Table 6 show that on the smaller 10-state domains, RIW with H Rnd is statistically dominant over all other methods on Antishaping and Combolock for the 500 and 1000 simulator budgets. However, for the more complex 50-state domains, the results of all algorithms using H Rnd are statistically indifferent. It can be observed that using H Rnd with RIW does improve its performance compared with H NA across all the domain settings with simulator budgets of 500 and 1000, besides Antishaping with 50 states. For the Skiing Atari-2600 game in Table 7, H Heu is the heuristic value based on the number of gates missed and remaining, as described in the previous section. RIW using H Heu dominates all other methods. Comparing RIW using H Heu with the results reported by BID17, it has similar performance to the DQN algorithm BID18 after 100 million frames of training. Since the simulation budget per action we use here is equivalent to 500 frames, and given that the maximum episode duration spans 3,600 actions, RIW achieves the performance in Table 7 considering only 1.8 million frames.
Table 7: Average and 95% confidence interval for the cost on the Atari-2600 Skiing game over 100 episodes.
AlphaGo Zero BID20 is considered to be state-of-the-art in MCTS algorithms. It combines the reasoning over confidence intervals first introduced with UCT BID14 and the classic simulation of base policies BID10, adding to both supervised learning algorithms to obtain, offline, parametric representations of costs-to-go which are efficient to evaluate. The resulting algorithm achieves super-human performance at the game of Go, long considered too hard for AI agents. Rather than using descriptions of game states directly as we do, AlphaZero uses a CNN to automatically extract features that describe spatial relations between game pieces. Like us, AlphaZero's lookahead uses a stochastic policy to select which paths to expand, but rather than Q-factors, it uses estimated win probabilities to prioritise controls, and simulates the opponent strategy via self-play to generate successor states. Our simulators are always given and remain unchanged, rather than being dynamic as is the case for AlphaZero. BID11 have recently presented a hybrid planning and learning approach that integrates the rollout algorithm of BID1 with a deep neural network. Similarly to AlphaZero, they use it both to guide the search and to automatically extract the set of state features F. Interestingly, Junyent et al.'s work does not use deep neural networks to approximate costs-to-go as AlphaZero does. A significant improvement in performance over Bandres et al.'s original rollout algorithm is reported with policies learnt after 40 million interactions with the simulator, using an overall computational budget much bigger than the one used by Bandres et al. MCTS approaches typically combine lookaheads and cost-to-go approximations, along with statistical tests to determine which are the most promising directions and focus their sampling effort. The width-based methods described in this paper do so too, but in ways which are, at first sight, orthogonal to existing strategies. It remains an area of active research to map out exactly how the width-based methods described in this paper, and those elsewhere by BID11 too, provide alternatives to the limitations of existing MCTS approaches.
Having said this, there is no general theory guiding the design of MCTS algorithms BID4, and to avoid involuntarily generating ad-hoc, problem-dependent solutions it is important to follow strict protocols that alert to a potential lack of statistical significance in results, while relying on a diverse set of benchmarks that are both easy to understand and highlight the limitations of existing state-of-the-art methods, so that those limitations can be overcome.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HJgdVWPTv4
We propose a new Monte Carlo Tree Search / rollout algorithm that relies on width-based search to construct a lookahead.
Deep Neural Networks (DNNs) are known for excellent performance in supervised tasks such as classification. Convolutional Neural Networks (CNNs), in particular, can learn effective features and build high-level representations that can be used for classification, but also for querying and nearest neighbor search. However, CNNs have also been shown to suffer from a performance drop when the distribution of the data changes from training to test data. In this paper we analyze the internal representations of CNNs and observe that the representations of unseen data in each class, spread more (with higher variance) in the embedding space of the CNN compared to representations of the training data. More importantly, this difference is more extreme if the unseen data comes from a shifted distribution. Based on this observation, we objectively evaluate the degree of representation’s variance in each class by applying eigenvalue decomposition on the within-class covariance of the internal representations of CNNs and observe the same behaviour. This can be problematic as larger variances might lead to mis-classification if the sample crosses the decision boundary of its class. We apply nearest neighbor classification on the representations and empirically show that the embeddings with the high variance actually have significantly worse KNN classification performances, although this could not be foreseen from their end-to-end classification . To tackle this problem, we propose Deep Within-Class Covariance Analysis (DWCCA), a deep neural network layer that significantly reduces the within-class covariance of a DNN’s representation, improving performance on unseen test data from a shifted distribution. We empirically evaluate DWCCA on two datasets for Acoustic Scene Classification (DCASE2016 and DCASE2017). We demonstrate that not only does DWCCA significantly improve the network’s internal representation, it also increases the end-to-end classification accuracy, especially when the test set exhibits a slight distribution shift. By adding DWCCA to a VGG neural network, we achieve around 6 percentage points improvement in the case of a distribution mismatch. Convolutional Neural Networks (CNNs) are the state of the art in many supervised learning tasks such as classification, and using the power of convolutional layers, CNNs can learn useful features that are often superior to engineered features, and build internal representations that can achieve high classification performance. It has been shown that CNNs have a surprising ability to fit data, so much so that they can even perfectly learn from data with random labels BID32. But of course, memorising the training data is not sufficient: a model is expected to generalize to unseen data points. Additionally, a robust model has to be able to not only deal with unseen data points that are similar to the training set, but also cope with unseen data points that may come from a slightly different distribution than the training data (distribution mismatch). When there is a distribution shift between the training and test sets, robustness of the model's representation becomes more important as it has to classify or embed data points that are quite different from the ones it has observed in the training set. In this paper, we investigate this by using a well-known DNN architecture (VGG BID28) that is adapted for audio classification BID9 and is widely used among researchers. 
We evaluate VGG on data with as well as without distribution mismatch and observe that while VGG exhibits a reasonable performance on the data without distribution mismatch, its performance significantly drops when tested on data from a shifted distribution. We start by analyzing the internal representations of the network by using visualisations. As will be seen in the first (a-c) and the 3rd rows (g-i) of FIG2, the network's internal representations in each class spread more in the embedding space for the unseen data (validation or test) compared to the training data. This is even more extreme when the unseen data comes from a shifted distribution (i).For an objective evaluation of the amount of the representation's variance in each class, we compute the within-class covariance of the representations of the network for each class, and we apply eigenvalue decomposition to compute the eigenvalues of each class's covariance matrix. We then report the sorted eigenvalues of the within-class covariance of the representations in Figure 3. As the blue curves show, the eigenvalues in unseen data of validation (b and e) and test (c and d) have considerably higher ranges than train data (a and d) for all the datasets we used. To better understand the effect of such high variance in the quality of generalisation in the representations of our network, we carry out K-nearest neighbor (KNN) experiments on the dataset without, and the dataset with distribution shift. As the in Figure 4 show, the performance degredation from validation (c) compared to test (d) in case of distribution mismatch is significantly higher compared to the performance drop from validation (a) to test (b) when the test data comes from a similar distribution. This observation is also aligned with what we observed in the visualisations from FIG2 that showed the data is more spread than validation data, when coming from a shifted distribution. To tackle this problem, we propose Deep Within-Class Covariance Analysis (DWCCA), a deep neural network layer that reformulates the conventional Within-Class Covariance Normalization (WCCN) BID12 as a DNN-compatible version. DWCCA is trained end-to-end using back-propagation, can be placed in any arbitrary position in a DNN, and is capable of significantly reducing the within-class covariance of the internal representation in a DNN.We empirically show that DWCCA significantly reduces the within-class covariance of the DNN's representations, in both cases. Further, we evaluate the generalization quality of the DNN's representations after applying DWCCA by performing nearest neighbor classification on its representations. Our show that DWCCA significantly improves the nearest neighbor classification in both cases, hence improving the generalization quality of the representations. And finally we report the end-to-end classification of the trained models on an acoustic scene classification task, using data from the annual IEEE Challenges on Detection and Classification of Acoustic Scenes and Events (DCASE). It turns out that the classification for the dataset with distribution shift are significantly improved by integrating the DWCCA layer, while the performance on the dataset without distribution mismatch stayed the same. The characteristics of the representations learned in CNNs can be influenced by many factors such as network architecture and the training data. 
The authors in BID19 investigated how architecture topologies affect the robustness and generalization of a representation and showed which representations from different architectures better transfer to other datasets and tasks. While the topology of the architecture influences the generality of its representations, several authors proposed methods that can improve the internal representation of a DNN. BID8 proposed a loss which learns linearly separable latent representations on top of a DNN, by maximising their smallest Linear Discriminant Analysis (LDA) BID10 eigenvalues. And in BID0, the authors propose creating a maximally correlated representation from different modalities. It has been shown that stand-alone CNNs are not very successful at generalizing to unseen data points from shifted distributions in tasks such as Acoustic Scene Classification (ASC), where such distributional shifts are common. ASC is defined as classifying environments from the sounds they produce BID1, and often these environments (e.g, home, park, city center) may sound very different in different cities or during different times of the year. Although in BID23 BID30 BID9 CNNs have shown promising in ASC when the unseen test data has a similar distribution to the training data, in BID21 BID33 similar CNNs that previously performed successfully in a matched distribution case, suffered significantly from the lack of generalization to a shifted distribution in the test set. To cope with this drawback of CNNs, BID27 investigated CNN manifold data augmentation using Generative Adversarial Networks BID11, while in BID15 BID31 BID27 the authors used an ensemble of CNNs as feature extractors and processed the deep features via Support Vector Machines (SVMs), followed by a late fusion of various models. They showed that although CNNs do not generalize well in the distribution shift case, they can learn useful features that can be incorporated for building new classifiers with better generalization properties. In this paper, we try to better understand these problems. We focus our efforts on investigating the reasons for the performance drop in CNNs when the data distribution is shifted. To this end, we analyze the internal representations in CNNs and propose DWCCA to tackle this issue. We start by introducing a common notation which will be used throughout the paper. We then first describe Within-Class Covariance Normalization (WCCN) -a classic machine learning method-, and then show how to cast it into a deep learning compatible version. Let W denote a set of N d-dimensional observations (feature vectors) belonging to C different classes c ∈ {1, ..., C}. The observations are in the present case either hand crafted features (e.g. i-vectors BID6) or any intermediate hidden representation of a deep neural network. WCCN is a linear projection that provides an effective compensation for the within-class variability and has proven to be effectively used in combination with different scoring functions BID5 BID22. The WCCN projection scales the feature space in the opposite direction of its within-class covariance matrix, which has the advantage that finding decision boundaries on the WCCN-projected data becomes easier BID13. The within-class covariance S w is estimated as: DISPLAYFORM0 where DISPLAYFORM1 Based on the definitions above we propose Deep Within-Class Covariance Analysis (DWCCA), a DNN-compatible formulation of WCCN. 
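A minimal numpy sketch of the classic, full-dataset WCCN estimate described above is given below; it is only meant to make the projection explicit and is not the trainable layer introduced in the next section.

```python
import numpy as np

def wccn_projection(X, y, eps=1e-6):
    """Estimate the within-class covariance S_w and return a projection B
    with B B^T = S_w^{-1}, so that X @ B has identity within-class covariance."""
    classes = np.unique(y)
    d = X.shape[1]
    S_w = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        S_w += np.cov(Xc.T, bias=True)               # per-class covariance
    S_w = S_w / len(classes) + eps * np.eye(d)       # average and regularise
    return np.linalg.cholesky(np.linalg.inv(S_w))
```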
The parameters of our networks are optimized with Stochastic Gradient Descent (SGD), and trained with mini-batches. The deterministic version of WCCN described above is usually estimated on the entire training set, which by the definition of SGD is not available in the present case. In the following we propose a DWCCA Layer which helps to circumvent these problems. FIG1 shows a schematic sketch of how the DWCCA Layer can be incorporated in a neural network, and its gradient flow. Instead of computing the within-class covariance matrix S w on the entire training set W we provide an estimateŜ b on the observations W b of the respective mini-batches. Given this estimate we compute the corresponding mini-batch projection matrixB b and use it to maintain a moving average projection matrixB asB = (1 − α)B + αB b. This is done similarly when computing the mean and standard deviation for normalising the activations in batch normalization BID16. The hyper parameter α controls the influence of the data in the current batch b on the final DWCCA projection matrix. The output of this processing step is the DWCCA projected data W b = W bB of the respective mini-batch. The DWCCA Layer can be seen as a special dense layer with a predefined weight matrix, the projection matrixB, with the difference that the parameters are computed via the activations of the previous layer, and not learned via SGD. The proposed covariance normalization is applied directly during optimization. For training our network with back-propagation, we need to establish gradient flow. To this end, we implement the DWCCA Layer using the automatic differentiation framework Theano BID3 which already provides the derivatives of matrix inverses and the Cholesky decomposition. We refer the interested reader to BID29 for details on this derivative. In this section, we detail our experimental setup and present our empirical . To evaluate the performance of CNNs in situations with and without distribution mismatch, we use the TUT Acoustic Scenes 2016 (DCASE2016) BID26 and TUT Acoustic Scenes 2017 (DCASE2017) BID25 datasets. The DCASE2016 dataset consists of 15 different acoustic scenes: lakeside beach, bus, cafe/restaurant, car, city center, forest path, grocery store, home, library, metro station, office, urban park, residential area, train, and tram. All audio material was collected as 3-5 minutes length recordings and then was cut into segments of 30 seconds length. The DCASE2017 dataset consists of the same 15 classes and uses the same recordings in its development set (training and validation) as used for DCASE2016 development set. But these recordings were split into only 10s long segments instead of 30s, which makes the learning and classification task harder. A new evaluation set (unseen test) was recorded for DCASE2017. The reason for choosing these two datasets is that DCASE2016 and DCASE2017 are based on the same development (= training and validation) data, but have different evaluations (= independent test) sets. While the test data for DCASE2016 were collected jointly with the development data, new test recordings were collected for DCASE2017, at a later time, in different places. Hence, there is a distribution mismatch between development and test data in the case of DCASE2017, as the many environments sound different from those in the development set (which will also show, indirectly, in our experimental ). 
This is detailed in the report of DCASE2017 BID25.In the following, we will use DCASE2016 to represent a data scenario without distribution shift, and DCASE2017 as the distribution shift case. In all experiments, we use the official 4-fold cross validation splits provided with the datasets and report CV on all folds separately, as well as the performance of our method on the unseen test set. DISPLAYFORM0 2 × 2 Max-Pooling + Drop-Out(0.3) 3 × 3 Conv(pad-1, stride-1)-64-BN-ReLu 3 × 3 Conv(pad-1, stride-1)-64-BN-ReLu 2 × 2 Max-Pooling + Drop-Out(0.3) 3 × 3 Conv(pad-1, stride-1)-128-BN-ReLu 3 × 3 Conv(pad-1, stride-1)-128-BN-ReLu 3 × 3 Conv(pad-1, stride-1)-128-BN-ReLu 3 × 3 Conv(pad-1, stride-1)-128-BN-ReLu 2 × 2 Max-Pooling + Drop-Out(0.3) 3 × 3 Conv(pad-0, stride-Global-Average-Pooling DWCCA (if applied) 15-way Soft-Max As explained before, we use a VGG-Style CNN proposed in BID9 which uses audio spectrograms as inputs. We will refer to this model as vanilla. To evaluate the effect of our proposed method, we apply a DWCCA layer on the output of global average pooling, as can be seen in TAB0. We will refer to this model in our as dwcca. The internal representations we will analyze are computed by using the output of global average pooling layer in vanilla, and the output of the DWCCA layer in dwcca. We also provide additional baselines for comparison: We report the performance of the best end-toend CNN model BID23 in the DCASE2016 challenge BID24. This method uses an ensemble of various CNN models. For the DCASE2017 experiments, we also choose the best-performing end-to-end CNN from the DCASE2017 challenge BID25 as an additional baseline to vanilla: BID33 that uses an ensemble fusion of various ResNets BID14 using several channels of audio (left, right and difference).We report classification of our end-to-end CNNs on all folds, and on the unseen test set. For the unseen test, we average the probabilities produced by the 4 models trained on the individual training parts in CV folds from the development data. Additionally, we report calibrated where we use a linear logistic regression model in each fold to regress the prediction probabilities of each model to the true labels. This method is better known as late fusion and proved successful in BID9. The probabilistic averaging for unseen test is done similarly for late fusion, as explained above. The setup used in all of our experiments are as follows: The initial learning rate is set to 0.0001, and the ADAM optimizer is used BID18. We choose a similar α (0.1) to the one used in batchnorm. A model selection with max. patience of 20 is applied and the learning rate is halved in each step of maximum patience. The architecture of our models is specified in TAB0. All models are trained with stratified batch iterators and batch size of 75. In all our experiments, the spectrogram of the whole audio segment (30 sec in DCASE2016 and 10 sec in DCASE2017) was fed to our models. All spectrograms in our experiments are extracted with the madmom library BID4 using the mono signal, and the Lasagne library BID7 was used for building neural network architectures. The DWCCA Layer was implemented as a Lasagne-compatible layer in Theano BID2. We carried out various experiments to demonstrate the issue we encounter in ASC with CNNs. First, we train our models on DCASE2016 and DCASE2017. 
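To make the DWCCA computation concrete, the following framework-agnostic sketch mirrors the forward pass described in the previous section (per-batch within-class covariance, its Cholesky-based projection, and the moving average controlled by α); the actual layer is implemented in Theano/Lasagne with gradients flowing through these operations.

```python
import numpy as np

class DWCCASketch:
    def __init__(self, dim, alpha=0.1, eps=1e-6):
        self.alpha, self.eps = alpha, eps
        self.B_avg = np.eye(dim)                      # running projection matrix

    def forward(self, W_b, y_b):
        d = W_b.shape[1]
        classes = np.unique(y_b)
        S_b = np.zeros((d, d))
        for c in classes:                              # batch within-class covariance
            Xc = W_b[y_b == c]
            if len(Xc) > 1:
                S_b += np.cov(Xc.T, bias=True)
        S_b = S_b / len(classes) + self.eps * np.eye(d)
        B_b = np.linalg.cholesky(np.linalg.inv(S_b))   # batch projection
        self.B_avg = (1 - self.alpha) * self.B_avg + self.alpha * B_b
        return W_b @ self.B_avg                        # projected activations
```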
For a better understanding of the issues in the representation learned by the CNN, we provide a visualization of the VGG representations after being projected into 2D via Principal Component Analysis (PCA) BID17. The can be found in FIG2.For analysing the within-class covariance of the representations, we apply an eigenvalue decomposition on the covariance matrix of the representations from each class. This technique is also used in other studies such as BID20 to investigate the dynamics of learning in neural networks. These eigenvalues are a good indicator of the covariance of the representations in each class. If high, it means that the representations on that specific class have a high covariance, and if the difference between the highest eigenvalue and the lowest eigenvalue is high it also indicates that the variance of the representation in that class is not spread equally in all dimensions. These can be found in Figure 3. The shade in this plot represents the variance of eigenvalues over different classes. To indirectly assess the structure of the internal representation spaces learned, we carry out a knearest-neighbor classification experiment in these representation spaces; the K-nearest neighbor on both datasets will be shown in Figure 4.Finally, we report the end-to-end classification on both datasets, for various methods; these can be found in TAB2. For a deeper understanding of the proposed method, we additionally provided the class-wise f-measure of DWCCA and the baseline VGG (vanilla) in TAB3. In FIG2, the network's internal representations in each class are projected into 2D via PCA and each class is represented by a different colour. Looking at first (a-c) and second (d-f) row, it can be seen that for the dataset without mismatched distribution the embeddings of unseen data (validation and test) are spread less after applying DWCCA. Also comparing the unseen embeddings to the training embeddings (with lower opacity and in grey) it can be seen that the unseen embeddings projected closer to the training embeddings after applying DWCCA. Comparing third (g-i) and fourth (j-l) row, it can be seen that for the case of a distribution shift DWCCA also reduces the variance of the embeddings in each class, ing in them being embedded closer to the training embeddings (grey). This suggests that this property can improve the generalisation of the representations. We will empirically evaluate this hypothesis later in this section by applying KNN classification on the representations. Looking at Figure 3, we can see that in all plots from dataset with, and dataset without distribution shift, DWCCA significantly reduces the within-class variability. This can be observed by looking at the eigenvalues of the covariances of the representations. An interesting observation is the range of eigenvalues in vanilla: In both datasets, eigenvalues have significantly larger range on unseen data (validation and test) compared to the training data. The maximum eigenvalue in DCASE2016 is around 0.7, while the maximum eigenvalue for unseen is around 7, about 10 times more. Also the maximum eigenvalue of the train set of DCASE2017 is around 2, while the max. eigenvalue on unseen data is around 20 (10 times larger).By looking at the KNN in Fig. 4 it can be seen that in both cases (mismatch / no mismatch), the KNN classification accuracy increases by adding DWCCA. 
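The two analyses used above, i.e. the per-class eigenvalue spectra of the embedding covariance and the KNN probe, can be reproduced with a few lines of Python (the value of k is an assumption of this sketch):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def classwise_eigenvalues(emb, labels):
    # Sorted eigenvalues of the covariance of the embeddings of each class,
    # used as a measure of within-class spread.
    return {c: np.sort(np.linalg.eigvalsh(np.cov(emb[labels == c].T, bias=True)))[::-1]
            for c in np.unique(labels)}

def knn_accuracy(train_emb, train_y, test_emb, test_y, k=5):
    # Nearest-neighbour classification in the embedding space as an indirect
    # probe of how well the representation generalises.
    knn = KNeighborsClassifier(n_neighbors=k).fit(train_emb, train_y)
    return knn.score(test_emb, test_y)
```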
Also, while the KNN performance is in a reasonable range on the validation set of both datasets, the test accuraty in the mismatch case (DCASE2017) drops significantly compared to the validation set. Additionally it can be seen that applying DWCCA significantly improves the performance on the test set with shifted distribution, adding an improvement of about 6 percentage point, while the improvement on the test set without mismatch is around 2 percentage points. Looking at the of end-to-end classifications in TAB2, we see that the performance of vanilla on DCASE 2017 consistently and significantly improves when adding DWCCA, on all development folds as well as on the unseen test data. We observe around 6 percentage points improvement by adding DWCCA to VGG.Looking at the of the dataset without mismatch, we see that although the on all folds were improved by adding DWCCA, the on the unseen test set do not significantly change. This can be explained better by looking at FIG2: the embeddings of validation (b) and test (c) indicate that the test data is projected closer to the training set than the validation set. This observation suggests that the unseen test in DCASE2016 might be similar (even more similar than the validation data) to the training set. This can also be confirmed by looking at the of the best CNN baseline, as well as vanilla: the performances on the unseen test set are consistently higher than all the validation folds. Hence, DWCCA could not help as there was not a large generalisation gap between training and test. It is worth mentioning that both vanilla and DWCCA are single models, trained on mono single channel spectrograms and no ensemble or multi-channel features were used in these experiments. In other words, a single VGG model achieves comparable performances to an ensemble of multi-channel Resnets. We also provide class-wise f-measures on the unseen test set for both datasets in TAB3. While on the dataset without distribution shift, the average f1 stays the same by adding DWCCA in both calibrated and non calibrated models, we can observe that there is a boost of 13 percentage points on the "train" class which was the class with the lowest f1 (both calibrated and non calibrated). It seems that DWCCA does not have a significant impact on classes with high f1: "office" and "beach" which stay the highest correctly predicted classes and do not face significant changes by DWCCA.On the dataset with distribution shift, we can see a significant improvement of 4 and 7 percentage points on average f1 for non-calibrated and calibrated models, respectively. The worst class in DCASE2017 was "beach" with 32%, which was boosted by 24 and 37 percentage points for noncalibrated and calibrated models, respectively. On the other hand, the best performing class, "forest path", drops by only 2 and 3 percentage points for non-calibrated and calibrated models, respectively. From the experimental , we may thus conclude that overall, reducing the within-class covariance of representations using DWCCA in more robust performance and, in case of a large gap between training and test, DWCCA can improve the generalisation. Additionally, the networks tend to reach a more uniform performance across various classes by improving the performance on the worst classes while not significantly degrading the best performing classes. 
In this paper, we presented the DWCCA layer, a DNN-compatible version of the classic WCCN, which is used to normalize the within-class covariance of the DNN's representation and improve the performance of CNNs on data points from shifted distributions. Using DWCCA, we improved the performance of the VGG network by around 6 percentage points when the test data points were from a shifted distribution. We analysed the generalisation of the embeddings by reporting KNN classification accuracies and showed that DWCCA also improves the generalisation of DNN representations, both with and without a distribution mismatch. We also showed that a large within-class covariance of the representations can be a sign of poor generalisation, and that DWCCA can significantly reduce the within-class covariance (WCC) and improve the generalisation of the representations.
This work was supported by the Austrian Ministry for Transport, Innovation & Technology, the Ministry of Science, Research & Economy, and the Province of Upper Austria in the frame of the COMET center SCCH. The authors would like to thank Hasan Bahari of KU Leuven for feedback and fruitful discussions. We also gratefully acknowledge the support of the NVIDIA Corporation with the donation of a Titan X GPU used for this research.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
S1giWPGsjQ
We propose a novel deep neural network layer for normalising within-class covariance of an internal representation in a neural network that results in significantly improving the generalisation of the learned representations.
Generative models have proven to be an outstanding tool for representing high-dimensional probability distributions and generating realistic looking images. A fundamental characteristic of generative models is their ability to produce multi-modal outputs. However, while training, they are often susceptible to mode collapse, which means that the model is limited in mapping the input noise to only a few modes of the true data distribution. In this paper, we draw inspiration from Determinantal Point Process (DPP) to devise a generative model that alleviates mode collapse while producing higher quality samples. DPP is an elegant probabilistic measure used to model negative correlations within a subset and hence quantify its diversity. We use DPP kernel to model the diversity in real data as well as in synthetic data. Then, we devise a generation penalty term that encourages the generator to synthesize data with a similar diversity to real data. In contrast to previous state-of-the-art generative models that tend to use additional trainable parameters or complex training paradigms, our method does not change the original training scheme. Embedded in an adversarial training and variational autoencoder, our Generative DPP approach shows a consistent resistance to mode-collapse on a wide-variety of synthetic data and natural image datasets including MNIST, CIFAR10, and CelebA, while outperforming state-of-the-art methods for data-efficiency, convergence-time, and generation quality. Our code will be made publicly available. Deep generative models have gained enormous research interest in recent years as a powerful framework to learn high dimensional data in an unsupervised fashion. Generative Adversarial Networks (GANs) BID10 and Variational AutoEncoders (VAEs) are among the most dominant generative approaches. They consist of training two networks: a generator (decoder) and a discriminator (encoder), where the generator attempts to map random noise to fake data points that simulate the probability distribution of real data.. GANs are typically associated with higher quality images compared to VAEs. Nevertheless, in the process of learning multi-modal complex distributions, both models may converge to a trivial solution where the generator learns to produce few modes exclusively, as referred to by mode collapse problem. To address this, we propose utilizing Determinantal Point Processes (DPP) to model the diversity within data samples. DPP is a probabilistic model that has been mainly adopted for solving subset selection problems with diversity constraints BID21, such as video and document summarization. However, Sampling from a DPP requires quantifying the diversity of 2 N subsets, where N is the size of the ground set. This renders DPP sampling from true data to be computationally inefficient in the generation domain. The key idea of our work is to model the diversity within real and fake data throughout the training process, which does adds an insignificant computational cost. Then, We encourage producing samples of similar diversity distribution to the true-data by back-propagating the DPP metric through the generator. This way, generator explicitly learns to cover more modes of real distribution, and accordingly alleviates mode collapse. Recent approaches tackled mode-collapse in one of two different ways: improving the learning of the system to reach a better convergence point(e.g. 
BID28 ; BID0); or explicitly enforcing the models to capture diverse modes or map back to the true-data distribution (e.g. BID37 ; BID2). Here we focus on a relaxed version of the former, where we use the same learning paradigm of the standard GANs and only change the objective function. The advantage of such an approach is to avoid adding any extra trainable parameters to the trained system while maintaining the same back-propagation steps as the standard GANs. Thus, our model converges faster to a fair equilibrium point where the generator captures the diversity of the true-data distribution while preserving the quality of generations. Contribution. We introduce a new loss function, that we denote Generative Determinantal Point Processes (GDPP) loss. Our loss only assumes an access to a generator G, a feature extraction function φ(·), and sampler from true data distribution p d. The loss encourages the generator to diversify generated samples that match the diversity of real data. This criterion can be considered as a complement to the original adversarial loss which attempts to learn an indistinguishable distribution from the true-data distribution without being specific to diverse modes. We assess the performance of GDPP on three different synthetic data environments, while also verifying the superiority on three real-world images datasets. We compared our approach with state-of-the-art approaches of more complex architectures and learning paradigms. Experiments show that our method outperforms all competing methods in terms of alleviating modecollapse and generations quality. Among the tremendous amount of work that tackles the training challenges of Generative Adversarial Networks (GANs), a few methods stood out as significant contributions towards addressing the problem of mode collapse. Methods that map the data back to noise. BID4 are one of the earliest methods that proposed learning a reconstruction network besides learning the deep generative network. Adding this extra network to the system aims at reversing the action of the generator by mapping from data to noise. Likelihood-free variational inference (LFVI) BID38, merge this concept with learning implicit densities using hierarchical Bayesian modeling. Ultimately, VEEGAN BID37 used the same concept, but the authors did not base their reconstruction loss on the discriminator. This has the advantage of isolating the generation process from the discriminator's sensitivity to any of the modes. BID2 proposed several ways of regularizing the objective of adversarial learning including geometric metric regularizer, mode regularizer, and manifold-diffusion training. Mode regularization specifically has shown a potential into addressing the mode collapse problem and stabilizing the GANs training in general. Methods that provide a surrogate objective function. on the other hand propose with InfoGAN an information-theoretic extension of GANs that obtains disentangled representation of data by latent-code reconstitution through a penalty term in its objective function. InfoGAN includes an autoencoder over the latent codes; however, it was shown to have stability problems similar to the standard GAN and requires stabilization tricks. BID8 base the ModeGAN method on the assumption of the availability of sufficient samples of every mode on the training data. In particular, if a sample from the true data distribution belongs to a particular mode, then the generated fake sample is likely to belong to the same mode. 
The Unrolled-GAN of BID28 propose a novel objective to update the generator with respect to the unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generator's objective. It has been shown to improve the generator training process which in turn helps to reduce the mode collapse problem. Generalized LS-GAN of BID7 define a pullback operator to map generated samples to the data manifold. BID39, with a similar philosophy, additionally draws samples from a mixture of Gaussians instead of a single Gaussian. There is, however, no specific enforcement to diversify samples. Spectral normalization strategies have been recently proposed in the works of BID29 and SAGAN BID40 to further stabilize the training. We note that these strategies are orthogonal to our contribution and could be implemented in conjunction with ours to further improve the training stability of generator models. Finally, improving the Wasserstein GANs of, WGAN-GP BID11 introduce a gradient penalization employed in state-of-the-art systems BID18.Methods use multiple generators and discriminators. One of the popular methods to reduce mode collapse is using multiple generator networks to provide a better coverage of the true data distribution. BID24 propose using two generators with shared parameters to learn the joint distribution of the data. The two generators are trained independently on two domains to ensure a diverse generation. However, sharing the parameters guide both the generators to a similar subspace. Also, BID6 propose a similar idea of multiple discriminators that are being an ensemble, which was shown to produce better quality samples. Recently, BID8 proposed MAD-GAN which is a multi-agent GAN architecture incorporating multiple generators and one discriminator. Along with distinguishing real from fake samples, the discriminator also learns to identify the generator that generated the fake sample. The learning of such a system implies forcing different generators to learn unique modes, which helps in a better coverage of data modes. DualGAN of BID30 improves the diversity within GANs at the additional requirement of training two discriminators. In contrast to these approaches, our DPP-GAN does not require the training of extra networks which provides an easier and faster training as well as being less susceptible to overfitting. Finally, we also refer to BID23 as another approach addressing mode collapse. They do that by modifying the discriminator input with concatenated samples to better sample the diversity within real data. Nevertheless, such approach is subject to memory and computational constraints as a of the significant increase in batch size. DPP is a probabilistic measure that was introduced in quantum physics BID26 and has been studied extensively in random matrix theory BID15. It provides a tractable and efficient means to capture negative correlation with respect to a similarity measure, that in turn can be used to quantify the diversity within a subset. A key characteristic in DPP that the model is agnostic about the order of items as pointed out by BID9, and therefore can be used to model data that is randomly sampled from a certain distribution. A point process P on a ground set V is a probability measure on the power set of V (i.e., 2 N), where N = |V| is the size of the ground set. 
A point process P is called determinantal if, given a random subset Y drawn according to P, we have for every S ⊆ Y, P(S ⊆ Y) = det(L_S), for some symmetric similarity kernel L ∈ R^{N×N}, where L_S is the similarity kernel of subset S. L must be a real, positive semidefinite matrix with L ⪯ I (all the eigenvalues of L are between 0 and 1), since it represents a probabilistic measure and all of its principal minors must be non-negative. L is often referred to as the marginal kernel because it contains all the information needed to compute the probability of any subset S being selected in V. L_S denotes the sub-matrix of L indexed by S, specifically L_S ≡ [L_ij], i, j ∈ S. Hence, the marginal probability of including one element e_i is p(e_i ∈ Y) = L_ii, and that of including two elements e_i and e_j is p(e_i, e_j ∈ Y) = L_ii L_jj − L_ij L_ji. A large value of L_ij reduces the likelihood of both elements appearing together. BID20 proposed decomposing the kernel L_S as a Gram matrix: L_ij = q(e_i) φ_i^T φ_j q(e_j) (eq. 2). Here q(e_i) ≥ 0 may be seen as a quality score of an item e_i in the ground set V, while φ_i ∈ R^D, D ≤ N, with ||φ_i||_2 = 1, is used as an ℓ2-normalized feature vector of the item. In this manner, φ_i^T φ_j ∈ [−1, 1] is evaluated as a "normalized similarity" between items e_i and e_j of V, and the kernel L_S is guaranteed to be a real positive semidefinite matrix. Geometric interpretation: det(φ(S)^T φ(S)) = ∏_i λ_i, where λ_i is the i-th eigenvalue of the matrix φ(S)^T φ(S). Hence, we may visualize that DPP models diverse representations of data because the determinant of φ(S)^T φ(S) corresponds to the volume spanned by the data in n-D, given by the product of the variances of the data (i.e., the eigenvalues). DPP has proven to be a tremendously valuable tool for problems that require enforcing diversity, such as document summarization (e.g., BID21 ; Hong & Nenkova FORMULA0 ), pose estimation (e.g., BID12) and video summarization (e.g., BID9 ; BID27). For instance, BID41 proposed to learn the two parameters q, φ in eq. 2 to quantify the diversity of the kernel L_S using MLPs based on spatio-temporal features of the video to perform summarization. Recently, BID16 proposed to use DPP to automatically create capsule wardrobes, i.e. assemble a minimal set of items that provide maximal mix-and-match outfits given an inventory of candidate garments.

Figure 1 (caption): (a) Given a generator G and a feature extraction function φ, the diversity kernel is constructed as L_S = φ^T φ. We use the last feature map of the discriminator in a GAN, or of the encoder in a VAE, as the feature representation φ of a fake/real batch. (b) Using φ obtained from generated samples, we model their diversity using L S B. We also model the diversity of a real batch by extracting its features and constructing its diversity kernel L D B. The adversarial loss aims at generating data points similar to the real ones, while the diversity loss aims at matching the fake-data diversity kernel L S B to the real-data diversity kernel L D B. We draw inspiration from DPP to model subset diversity using a kernel: during training, we extract the feature representations of real and fake batches, φ_real and φ_fake, construct their diversity kernels L S B and L D B, and our loss encourages G to synthesize data of a diversity L S B similar to the real-data diversity L D B.

As illustrated in Fig. 1b, our GDPP loss encourages the generator to sample fake data of diversity similar to the real-data diversity. The key challenge is to model the diversity within real data and fake data. We discussed in Sec.
3 how DPP is used to model the negative correlation within a discrete data distribution, which is commonly employed as a measure of diversity. Thus, we construct a DPP kernel for both the real data and the generated samples at every iteration of the training process as shown in Fig. 1a. Then, we encourage the network to generate samples that have a similar diversity kernel to that of the training data. To simplify the optimization process, we choose to match the eigenvalues and eigenvectors of the fake data DPP kernel with their corresponding of the real data DPP kernel. Eigenvalues and vectors capture the manifold structure of both real and fake data, and hence renders the matching problem simpler. During training, a generative model G produces a batch of samples S B = {e 1, e 2, · · · e B}, where B is the batch size. Our aim is to produce S B that is probabilistically sampled following the DPP which satisfies: DISPLAYFORM0 such that Y is a random variable representing a subset drawn with a generative point process P, and L S B is the kernel matrix of the subset indexed by S, as detailed in Sec. 3. Connecting DPP to the data generation, we assume that G is the point process sampler that generates subset S B according to P. Let φ(S B) ∈ R d×B be a feature representation of the generated subset S B, where φ(·) is a feature extraction function. Therefore, the DPP kernel is constructed as follows: DISPLAYFORM1 where z B ∈ R dz×B is noise vector inputted to the generator G ing in the generated subset S B. Let us denote φ(S B) a feature representation of a generated batch and φ(D B) a feature representation of a true batch. Our aim is to match DISPLAYFORM2, where λ i real and λ i f ake are the i th eigenvalues of L D B and L S B respectively. Thus, our problem is reduced to learn a fake diversity kernel L S B close to the real diversity kernel L D B. We choose to match those kernels using their major characteristics: eigenvalues and eigenvectors. Our GDPP loss is composed of two components: diversity magnitude loss L m, and diversity structure loss L s as follows: Scaling the structure loss aims to induce noise invariance within the eigenvectors similarity learning. This can be seen as alleviating the effect of outlier structures that intrinsically exist within the real data on learning the diversity kernel of fake data. We note that all the eigenvalues of L S B and L S D will be real non-negative since both of the kernels are symmetric semi-positive definite. Therefore, the kernels represent a probability measure since none of the principal minors will be negative. DISPLAYFORM3 Integrating GDPP loss with GANs. For a primary benchmark, we integrate our GDPP loss with GANs. Since our aim is to avoid adding any extra trainable parameters, we utilize features extracted by the discriminator. We choose to use the hidden activations before the last layer as our feature extraction function φ. We apply 2 normalization on the obtained features that guarantees constructing a positive semi-definite matrix according to eq. 2. We finally integrate L DP P g into the GAN objective by only modifying the generator loss of the standard adversarial loss BID10 as follows: DISPLAYFORM4 Integrating GDPP loss with VAEs. A key property of our loss is its generality to any generative model. Beside incorporating our loss in GANs, we prove it can be also embedded within Variational Auto-Encoders (VAEs) proposed in BID19. 
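Since the exact displays for the magnitude and structure terms are not reproduced above, the following PyTorch sketch should be read as one plausible instantiation of the description: an eigenvalue-matching term plus an eigenvector term scaled by the true-data eigenvalues. The function name, the squared-error choice for the magnitude loss, and the cosine-similarity choice for the structure loss are our own assumptions, not the paper's exact equations.

```python
import torch
import torch.nn.functional as F

def gdpp_loss(phi_fake, phi_real, eps=1e-6):
    """One plausible form of the GDPP loss described above.

    phi_fake, phi_real: (d, B) feature matrices of a fake and a real batch,
    e.g. the last hidden activations of the discriminator (or VAE encoder).
    """
    phi_fake = F.normalize(phi_fake, dim=0)          # l2-normalize each sample
    phi_real = F.normalize(phi_real, dim=0)
    eye = eps * torch.eye(phi_fake.size(1), device=phi_fake.device)
    L_fake = phi_fake.t() @ phi_fake + eye           # L S B, (B, B), PSD
    L_real = phi_real.t() @ phi_real + eye           # L D B, (B, B), PSD

    # Symmetric PSD kernels -> real, non-negative eigenvalues.
    evals_f, evecs_f = torch.linalg.eigh(L_fake)
    evals_r, evecs_r = torch.linalg.eigh(L_real)

    # Diversity magnitude loss: match the eigenvalues of the two kernels.
    loss_m = F.mse_loss(evals_f, evals_r.detach())

    # Diversity structure loss: match eigenvectors (|cos| absorbs sign flips),
    # scaled by the normalized real eigenvalues to damp outlier structures.
    weights = (evals_r / evals_r.sum()).detach()
    cos = F.cosine_similarity(evecs_f, evecs_r, dim=0).abs()
    loss_s = (weights * (1.0 - cos)).sum()
    return loss_m + loss_s
```

The generator objective would then simply add this penalty to the usual adversarial term, e.g. `loss_g = adversarial_loss + gdpp_loss(phi_fake, phi_real)`, leaving the discriminator loss and the back-propagation steps unchanged, as described in the text.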
We use the decoder network as our generator G and the final hidden activations within the encoder network as our feature extraction function φ.To compute L DP P at the training time, we feed an input training batch D B to the encoder constructing L D B. We also feed a random Gaussian noise to the decoder that generates a fake batch S B, which we then feed to the encoder to construct L S B. Finally, we compute L DP P as stated in eq. 2 using the 2 normalized features, then add it to the original VAE loss at the training time as follows: DISPLAYFORM5 5 EXPERIMENTSIn our experiments, we target evaluating the generation based on two criteria: mode collapse and generated samples quality. Due to the intractability of log-likelihood estimation, this problem tends to be non-trivial in real data. Therefore, we start by analyzing our method on synthetic data where we can accurately evaluate the performance. Then, we demonstrate the effectiveness of our method on real data using standard evaluation metrics. We use the same architecture and data on all the competing methods (See appendix A for details). Mode collapse and the quality of generations can be explicitly evaluated on synthetic data since the true distribution is well-defined. In this section, we evaluate the performance of the methods on mixtures of Gaussian of known mode locations and distribution (See appendix B for details). We use the same architecture for all the models, which is the same one used by BID28 and BID37. We note that the first four rows in TAB0 are obtained from BID37, since we are using the same architecture and training paradigm. Fig. 2 effect of each method on the 2D Ring and Grid data. As shown by the vanilla-GAN in the 2D Ring example (Fig. 2a), it can generate the highest quality samples however it only captures a single mode. On the other extreme, the WGAN-GP on the 2D grid (Fig. 2k) captures almost all modes in the true distribution, but this is only because it generates highly scattered samples that do not precisely depict the true distribution. GDPP-GAN (Fig. 2f,l) creates a precise representation of the true data distribution reflecting that the method learned an accurate structure manifold. Performance Evaluation: At every iteration, we sample fake points from the generator and real points from the given distribution. Mode collapse is quantified by the number of real modes recovered in fake data, and the generation quality is quantified by the % of High-Quality Samples. A generated sample is counted as high-quality if it was sampled within three standard deviations in case of 2D Ring or Grid, and ten standard deviations in case of the 1200D data. We train all models for 25K iterations, except for VEEGAN which is trained for 100K iterations to properly converge. At inference time, we generate 2500 samples from each of the trained models and measure both metrics. We report the numbers averaged over five runs with different random initialization in TAB0. GDPP-GAN clearly outperforms all other methods, for instance on the most challenging 1200D dataset that was designed to mimic a natural data distribution, bringing a 63% relative improvement in high-quality samples and 15% in mode detection over its best competitor WGAN-GP.Ablation Study: We run a study on the 2D Ring and Grid data to show the individual effects of each component in our loss. As shown in TAB2, optimizing the determinant det L S directly increases the diversity generating the highest quality samples. 
This works best on the 2D Ring since the true data distribution can be represented by a repulsion model. However, for more complex data such as the 2D Grid, optimizing the determinant fails because it does not well-represent the real manifold structure but aims at repelling the fake samples from each other. Learning the unnormalized structure is prone to outlier structures introduced by the noise in the data and in the learning process. However, when scaling the structure loss by the true-data eigenvalues seems to better disengage the model from noise that exists within the true-data features and only focus on learning the prominent structure. We evaluate the amount of training data needed by each method to reach the same local optima as evaluated by our two metrics on both the 2D Ring and Grid data. Since we are sampling the true-data from a mixture of Gaussians, we can generate an infinite size of training data. DISPLAYFORM0 Figure 2: Scatter plots of the true data(green dots) and generated data(blue dots) from different GAN methods trained on mixtures of Gaussians arranged in a ring (top) or a grid (bottom). Therefore, we can quantify the amount of the training data by using the batch-size while fixing the number of back-propagation steps. In this experiment FIG2, we run all the methods for the same number of iterations and vary the batch size. However, WGAN-GP tends to capture higher quality samples with fewer data. In the case of 2D Grid data, GDPP-GAN performs on par with other methods for small amounts of data, yet it tends to significantly outperform other methods on the quality of generated samples once trained on enough data. Time-Efficiency: Another property of interest is which method converges faster given the same amount of training data. In this experiment, we fix the batch size at 512 and train the models for a variable number of iterations FIG2. For the 2D Ring, Only VEE-GAN captures a higher number of modes before GDPP-GAN, however, they are of much lower quality than the ones generated by GDPP-GAN. In the 2D Grid data, GDPP-GAN performs on par with unrolled-GAN for the first 5,000 iterations while the others are falling behind. After that, our method significantly outperforms all the methods with respect to both the number of captured modes and the quality of generated samples. We also shows that the GDPP-GAN has an indistinguishable time cost over the DCGAN in Table 6, rendering it the fastest over other baselines. We use the experimental setting of state-of-the-art BID11 and BID28 for evaluating models on the Stacked MNIST and CIFAR10. On CelebA, we use the experimental setting of state-of-the-art BID17. Nonetheless, we investigated the robustness of our method by using a more challenging setting proposed by BID37 and we show its in Table 5 of Appendix C. In our evaluation, we focus on comparing with state-of-theart method that adopt a change in the original adversarial loss. Nevertheless, many of them can be deemed orthogonal to our contribution, and can enhance the generation if integrated with our approach. We also show that our method is robust to random initialization in Section C. Table 3: Performance of various methods on real datasets. Stacked-MNIST is evaluated using the number of captured modes (Mode Collapse) and KL-divergence between the generated class distribution and true class distribution (Quality of generations). CIFAR-10 is evaluated by Inference-via-Optimization (Mode-Collapse) and Inception-Score (Quality of generations). 
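For reference, here are minimal sketches of the two evaluation protocols referenced above: the mode-recovery and high-quality-sample counts used for the synthetic Gaussian mixtures, and the Inference-via-Optimization (IvO) score named in the Table 3 caption and described in more detail below. The distance thresholds follow the text; everything else (function names, optimizer settings, and the assumption that a mode counts as recovered once at least one high-quality sample is assigned to it) is our own.

```python
import torch

def mixture_metrics(samples, mode_centers, std, n_std=3.0):
    """Synthetic-data protocol: a sample is 'high quality' if it lies within
    n_std standard deviations of some mode (3 for 2D Ring/Grid, 10 for the
    1200D data); a mode is recovered if it attracts at least one such sample."""
    dists = torch.cdist(samples, mode_centers)        # (M, K) distances
    min_dist, nearest = dists.min(dim=1)
    high_quality = min_dist <= n_std * std
    modes_recovered = nearest[high_quality].unique().numel()
    return modes_recovered, 100.0 * high_quality.float().mean().item()

def inference_via_optimization(G, x_real, z_dim, steps=200, lr=1e-2):
    """IvO sketch: optimize the noise vector so that G(z) matches a real
    image, and report the final MSE (lower suggests fewer missed modes)."""
    z = torch.randn(x_real.size(0), z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((G(z) - x_real) ** 2).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return ((G(z) - x_real) ** 2).mean().item()
```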
Stacked-MNIST A variant of MNIST designed to increase the number of discrete modes in the data. The data is synthesized by stacking three randomly sampled MNIST digits along the color channel ing in a 28x28x3 image. In this case, Stacked MNIST has 1000 discrete modes corresponding to the number of possible triplets of digits. Following BID11, we generate 50,000 images that are later used to train the networks. We train all the models for 15,000 iterations, except for DCGAN and unrolled-GAN that need 30,000 iterations to converge to a reasonable local-optima. We follow BID37 to evaluate methods on the number of recovered modes and the divergence between the true and fake distributions. We sample 26000 fake images for all the models. We identify the mode of each generated image by using the classifier mentioned in BID2 that is trained on the standard MNIST dataset to classify each channel of the fake sample. The quality of samples is evaluated by computing the KL-divergence between the generated label distribution and the training labels distribution. GDPP-GAN captures all the modes and generates a fake distribution that has the lowest KL-Divergence with the true-data. Moreover, when applied on the VAE, it doubles the number of modes captured (623 vs 341) and cuts the KL-Divergence to half (1.3 vs 2.4).We note that we run a separate experiment on MNIST in Section C.4 to assess the severity of mode collapse following BID34. We evaluate the methods on CIFAR-10 after training all the models for 100K iterations. Unlike Stacked-MNIST, the modes are intractable in this dataset. To assess the performance on this dataset, we use two metrics: Inception Score for the generation quality and Inference-viaOptimization for diversity. As shown in the Quantitative on CIFAR and Stacked MNIST (Table 3), GDPP-GAN consistently outperforms all other methods in both mode collapse and developing higher quality samples. When applying the GDPP on the VAE, it reduces the IvO by 63%, however, we note that both the inception-scores are considerably low which is also observed by BID36 when applying the VAE on CIFAR-10.Inference-via-optimization BID28, has been used to assess the severity of mode collapse in the generator by providing a metric to compare real images with the nearest generated image. In the case of mode collapse, there are some real images for which this distance is large. We measure this metric by sampling a real image x from the test set of real data. Then we optimize the 2 loss between x and generated image G(z) by modifying the noise vector z. If a method attains low MSE, then it can be assumed that this method captures more modes than ones that attain a higher MSE. FIG3 presents some real images with their nearest optimized generations. Randomly generated sample images can be seen in Appendix D. As demonstrated by BID37, this metric can be fooled by producing blurry images out of the optimization. That is why the inception score is necessary for this evaluation. Inception score BID35 is widely used as a metric for assessing the quality of images. It bases its validity from the premise that every realistic image should be recognizable by a standard architecture(e.g., Inception Network). Ideally, meaning that the score distribution for it must be dominated by one class. We also assess the stability of the training, by calculating the inception score at different stages while training on CIFAR-10 FIG4. Evidently, DCGAN has the least stable training with a high variation. 
However, by only adding GDPP penalty term to the generator loss, model generates high-quality images the earliest on training with a stable increase. CelebA Finally, to evaluate the performance of our loss on large-scale Adversarial training, we train Progressive-Growing GANs BID17 We show the effect of embedding our loss in adversarial training by adding it to the WGAN-GP this time instead of DCGAN loss, which is as well orthogonal to our loss. We train the model for 40K iterations corresponding to 4 scales up to 64 × 64 on CelebA dataset BID25. Unlike CIFAR-10, CelebA dataset does not simulate ImageNet because it only contains faces not natural scenes/objects. Therefore, using a model trained on ImageNet as a basis for evaluation (i.e., Inception Score), will cause inaccurate recognition. On the other hand, IvO operates by optimizing the noise vector to match real image. However, large scale datasets requires larger noise vector to cover all the synthetic manifold. This renders the optimization prone to divergence or convergence to poor local optimas; jeopardizing the metric effectiveness. We follow BID17 to evaluate the performance on CelebA using Sliced Wasserstein Distance (SWD) BID32. A small Wasserstein distance indicates that the distribution of the patches is similar, which entails that real and fake images appear similar in both appearance and variation at this spatial resolution. Accordingly, SWD metric can evaluate the quality of images as well as the severity of mode-collapse on large-scale datasets such as CelebA. TAB4 shows the average and minimum SWD metric across the last 10K training iterations. We chose this time frame because it shows a saturation in the training loss for all methods. For qualitative examples, refer to Fig. 11 in Appendix D. In this work, we introduce a novel criterion to train generative networks on capturing a similar diversity to one of the true data by utilizing Determinantal Point Process(DPP). We apply our criterion to Generative Adversarial training and the Variational Autoencoder by learning a kernel via features extracted from the discriminator/encoder. We train the generator on optimizing a loss between the fake and real, eigenvalues and eigenvectors of this kernel to simulate the diversity of the real data. Our GDPP framework accumulates many desirable properties: it does not require any extra trainable parameters, it operates in an unsupervised setting, yet it consistently outperforms stateof-the-art methods on a battery of synthetic data and real image datasets as measure by generation quality and invariance to mode collapse. Furthermore, GDPP-GANs exhibit a stabilized adversarial training and has been shown to be time and data efficient as compared to state-of-the-art approaches. Moreover, the GDPP criterion is architecture and model invariant, allowing it to be embedded with any variants of generative models such as adversarial feature learning or conditional GANs. The architectures of the generator and discriminator networks employed in our experiments are given in FIG5. Tan h In all of our experiments, we use Adam Optimizer with β 1 = 0.5 and = 1×10 −8. For the synthetic data experiments, we follow the configurations used by BID37 and BID28. We use 1 × 10 −4 for the discriminator learning rate, and 1 × 10 −3 for the generator learning rate. For synthetic data we use a batch size of 512. For Stacked-MNIST and CIFAR-10 we use a batch size of 64. 
For CIFAR-10, we use a batch size of 16.For the Stacked MNIST, CIFAR-10 and CelebA datasets, we use 2 × 10 −4 as the learning rate for both of the generator and the discriminator. To relatively stabilize the training of DCGAN, we follow the protocol in BID11 to train it by applying a learning rate scheduler. The decay is to happen with a ratio of 1/(#max − iters) at every iteration. The first data collection is introduced in BID28 as a mixture of eight 2D Gaussian distributions arranged in a ring. This distribution is the easiest to mimic since it only requires the generated data to have an equal repulsion from the center of the distribution, even if it is not targeted to the modes. The second and third collections were introduced by BID37. In the second collection, there is a mixture of twenty-five 2D Gaussian distributions arranged in a grid. Unlike the first collection, this one requires a more structured knowledge of the true data modes' locations. The last collection is a mixture of ten 700 dimensional Gaussian distributions embedded in a 1200 dimensional space. This mixture arrangement mimics the higher dimensional manifolds of natural images, and demonstrates the effectiveness of each method on manipulating sparse patterns. Since the weights of the generator are being initialized using a random number generator N, the of a generative model may be affected by poor initializations. In FIG6 we show qualitative examples on 2D Grid data, where we use high standard deviation for the random number generator (> 100) as an example of poor initializations. Evidently, GDPP-GAN respects the structure of the true data manifold even with poor initializations. On the other extreme, WGAN-GP tends to map the generated data to a disperse distribution covering all modes but with low quality generations. To further show the effectiveness of our approach, we examine it under a more challenging experimental setting. The experimental setting of BID37 entails an architecture and hyperparameters that produce relatively poor as compared with the setting of Table 3. For example, In BID37 setting, DCGAN produces 99 modes, while in our experimental setting, DCGAN produces 427 modes on Stacked MNIST dataset. We note that our main in Table 3 are computed using the same experimental setting suggested by BID11 and BID28 ) on a more realistic architecture. Our method remains to have a clear advantage when compared to the rest of the methods for both CIFAR-10 and Stacked-MNIST (e.g., covering 90.6% more modes on Stacked-MNIST from 150 to 286 and at a higher quality). We obtain the first four rows from BID37. CIFAR-10 #Modes (Max 1000) KL div. IvO DCGAN 99 3.4 0.00844 ALI 16 5.4 0.0067 Unrolled-GAN BID28 48.7 4.32 0.013 VEEGAN BID37 150 2.95 0.0068 GDPP-GAN (Ours) 286 2.12 0.0051 Table 5: Performance on real datasets using the challenging experimental setting of BID37. GDPP-GAN remains to outperform all baselines on both Stacked-MNIST and CIFAR-10 for all metrics. Eigendecomposition of an n×n matrix requires O(n 3 +n 2 log 2 nlogb) runtime within a relative error bound of 2 −b as shown in BID31. In our loss, we perform two eigendecompositions: L S B, L D B corresponding to the fake and true DPP kernels respectively. Therefore, the runtime analysis of our loss is O(n 3), where n is the batch size. Normally the batch size does not exceed 1024 for most training paradigms due to memory constraints. In our experiments, we used 512 for synthetic data and 64 or 16 for real data. 
Hence, the eigendecomposition does not account for a significant delay in the method. To further verify this claim, we measured the relative share of each iteration's time taken by the eigendecompositions. We obtained 11.61% for the synthetic data, 9.36% for Stacked-MNIST and 8.27% for CIFAR-10. We also show the average iteration running time of all baselines in Table 6. We computed the average over 1000 iterations across 5 different runs. Our method is the closest to the standard DCGAN running time, and faster than the rest of the baselines by a large margin.

Table 6: Average iteration time for each of the baseline methods on CIFAR-10. GDPP-GAN obtains the closest time to the default DCGAN.

C.4 NUMBER OF STATISTICALLY-DIFFERENT BINS (NDB)

BID34 proposed a new evaluation metric to assess the severity of mode collapse in a generative model. They based their metric on a simple observation: in two sets of samples that represent the same distribution, the number of samples that fall into a given bin should be the same up to sampling noise. In other words, if we cluster the true-data distribution and the fake-data distribution into the same number of clusters/bins, then the number of samples from each distribution in every bin should be similar. We follow BID34 to compute this metric on the MNIST dataset, and compare our method with their results in Table 7. We note that we used their open-source implementation of the metric, and we obtained the first three rows from their paper. We use 20,000 samples from our model and the training data to compute the NDB/K.

Model       K=100   K=200   K=300
TRAIN       0.06    0.04    0.05
MFA BID34   0.14    0.13    0.14
DCGAN       0.41    0.38    0.46
WGAN        0.16    0.20    0.21
GDPP-GAN    0.11    0.15    0.12

Table 7: NDB/K, the number of statistically different bins with significance level 0.05, divided by the number of bins K (lower is better).

Figure sub-captions: (a) GDPP-GAN after 15K iterations. (b) GDPP-VAE after 45K iterations.
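To make the NDB protocol above concrete, here is a compact approximation: bins come from K-means on the training data, and a bin is counted as "statistically different" if a two-proportion z-test rejects equality of the train/fake bin proportions. The open-source reference implementation mentioned above may differ in details; the function name and defaults are ours.

```python
import numpy as np
from scipy.stats import norm
from sklearn.cluster import KMeans

def ndb_score(train, fake, K=100, alpha=0.05, seed=0):
    """Approximate NDB/K for flattened samples train (N, d) and fake (M, d)."""
    km = KMeans(n_clusters=K, random_state=seed, n_init=10).fit(train)
    p_train = np.bincount(km.labels_, minlength=K) / len(train)
    p_fake = np.bincount(km.predict(fake), minlength=K) / len(fake)
    # Two-proportion z-test per bin, with a pooled proportion estimate.
    pooled = (p_train * len(train) + p_fake * len(fake)) / (len(train) + len(fake))
    se = np.sqrt(pooled * (1 - pooled) * (1 / len(train) + 1 / len(fake)))
    z = np.abs(p_train - p_fake) / np.maximum(se, 1e-12)
    different = z > norm.ppf(1 - alpha / 2)
    return different.sum() / K          # NDB divided by K; lower is better
```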
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
S1x8WnA5Ym
The addition of a diversity criterion inspired by DPP to the GAN objective avoids mode collapse and leads to better generations.
Despite existing work on ensuring generalization of neural networks in terms of scale-sensitive complexity measures, such as norms, margin and sharpness, these complexity measures do not offer an explanation of why neural networks generalize better with over-parametrization. In this work we suggest a novel complexity measure based on unit-wise capacities, resulting in a tighter generalization bound for two layer ReLU networks. Our capacity bound correlates with the behavior of the test error with increasing network sizes (within the range reported in the experiments), and could partly explain the improvement in generalization with over-parametrization. We further present a matching lower bound for the Rademacher complexity that improves over previous capacity lower bounds for neural networks.

Deep neural networks have enjoyed great success in learning across a wide variety of tasks. They played a crucial role in the seminal work of , starting an arms race of training larger networks with more hidden units, in pursuit of better test performance . In fact the networks used in practice are over-parametrized to the extent that they can easily fit random labels to the data . Even though they have such a high capacity, when trained with real labels they achieve smaller generalization error. Traditional wisdom in learning suggests that using models with increasing capacity will result in overfitting to the training data. Hence the capacity of the models is generally controlled either by limiting the size of the model (number of parameters) or by adding explicit regularization, to prevent overfitting to the training data. Surprisingly, in the case of neural networks we notice that increasing the model size only helps in improving the generalization error, even when the networks are trained without any explicit regularization - weight decay or early stopping (; ; c). In particular, Neyshabur et al. (2015c) observed that training models with an increasing number of hidden units leads to a decrease in the test error for image classification on MNIST and CIFAR-10. Similar empirical observations have been made over a wide range of architectural and hyper-parameter choices (; ;). What explains this improvement in generalization with over-parametrization? What is the right measure of complexity of neural networks that captures this generalization phenomenon?

Complexity measures that depend on the total number of parameters of the network, such as VC bounds, do not capture this behavior as they increase with the size of the network. Existing works suggested different norm, margin and sharpness based measures to measure the capacity of neural networks, in an attempt to explain the generalization behavior observed in practice (b; ; ; ; ; BID0; BID0).

FIG0 (caption, first panel description truncated): We observe that even after the network is large enough to completely fit the training data (reference line), the test error continues to decrease for larger networks. Middle panel: Training a fully connected feedforward network with a single hidden layer on CIFAR-10. We observe the same phenomenon as the one observed for the ResNet18 architecture. Right panel: Unit capacity captures the complexity of a hidden unit and unit impact captures the impact of a hidden unit on the output of the network, and both are important factors in our capacity bound (Theorem 1). We observe empirically that both average unit capacity and average unit impact shrink with a rate faster than 1/√h, where h is the number of hidden units. Please see Supplementary Section A for experiment settings.
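As a concrete reading of the middle-panel experiment in the caption above, the following is a minimal sketch of the width-sweep protocol: plain SGD with momentum, no explicit regularization, and training stopped once the cross-entropy reaches 0.01 (optimizer, learning rates and stopping criterion follow Supplementary Section A; the flattened input dimension d = 3072 for CIFAR-10, the data loader, and the function name are our own assumptions).

```python
import torch
import torch.nn as nn

def train_width_sweep(train_loader, widths=tuple(2 ** k for k in range(3, 16)),
                      d=3072, c=10, lr=1e-3, max_epochs=1000):
    """Train one-hidden-layer ReLU networks of increasing width h (sketch).

    lr = 1e-3 for CIFAR-10/SVHN, 1e-2 for MNIST, per the appendix settings.
    """
    models = {}
    for h in widths:
        model = nn.Sequential(nn.Flatten(), nn.Linear(d, h), nn.ReLU(), nn.Linear(h, c))
        opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        for _ in range(max_epochs):
            total, n = 0.0, 0
            for x, y in train_loader:
                opt.zero_grad()
                loss = nn.functional.cross_entropy(model(x), y)
                loss.backward()
                opt.step()
                total, n = total + loss.item() * len(y), n + len(y)
            if total / n <= 0.01:        # stopping criterion from the text
                break
        models[h] = model
    return models
```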
In particular, showed a margin based generalization bound that depends on the spectral norm and 1,2 norm of the layers of a network. However, as shown in and in FIG6, these complexity measures fail to explain why over-parametrization helps, and in fact increase with the size of the network. numerically evaluated a generalization bound based on PAC-Bayes. Their reported numerical generalization bounds also increase with the increasing network size. These existing complexity measures increase with the size of the network, even for two layer networks, as they depend on the number of hidden units either explicitly, or the norms in their measures implicitly depend on the number of hidden units for the networks used in practice To study and analyze this phenomenon more carefully, we need to simplify the architecture making sure that the property of interest is preserved after the simplification. We therefore chose two layer ReLU networks since as shown in the left and middle panel of FIG0, it exhibits the same behavior with over-parametrization as the more complex pre-activation ResNet18 architecture. In this paper we prove a tighter generalization bound (Theorem 2) for two layer ReLU networks. Our capacity bound, unlike existing bounds, correlates with the test error and decreases with the increasing number of hidden units, in the experimental range considered. Our key insight is to characterize complexity at a unit level, and as we see in the right panel in FIG0 these unit level measures shrink at a rate faster than 1/ √ h for each hidden unit, decreasing the overall measure as the network size increases. When measured in terms of layer norms, our generalization bound depends on the Frobenius norm of the top layer and the Frobenius norm of the difference of the hidden layer weights with the initialization, which decreases with increasing network size (see FIG1).The closeness of learned weights to initialization in the over-parametrized setting can be understood by considering the limiting case as the number of hidden units go to infinity, as considered in and BID1. In this extreme setting, just training the top layer of the network, which is a convex optimization problem for convex losses, will in minimizing the training error, as the randomly initialized hidden layer has all possible features. Intuitively, the large number of hidden units here represent all possible features and hence the optimization problem involves just picking the right features that will minimize the training loss. This suggests that as we over-parametrize the networks, the optimization algorithms need to do less work in tuning the weights of the hidden units to find the right solution. indeed have numerically evaluated a PAC-Bayes measure from the initialization used by the algorithms and state that the Euclidean distance to the initialization is smaller than the Frobenius norm of the parameters. also make a similar empirical observation on the significant role of initialization, and in fact prove an initialization dependent generalization bound for linear networks. However they do not prove a similar generalization bound for neural networks. suggested a Fisher-Rao metric based complexity measure that correlates with generalization behavior in larger networks, but they also prove the capacity bound only for linear networks. 
Contributions: Our contributions in this paper are as follows.• We empirically investigate the role of over-parametrization in generalization of neural networks on 3 different datasets (MNIST, CIFAR10 and SVHN), and show that the existing complexity measures increase with the number of hidden units -hence do not explain the generalization behavior with over-parametrization.• We prove tighter generalization bounds (Theorem 2) for two layer ReLU networks, improving over previous . Our proposed complexity measure for neural networks decreases with the increasing number of hidden units, in the experimental range considered (see Section 2), and can potentially explain the effect of over-parametrization on generalization of neural networks.• We provide a matching lower bound for the Rademacher complexity of two layer ReLU networks with a scalar output. Our lower bound considerably improves over the best known bound given in , and to our knowledge is the first such lower bound that is bigger than the Lipschitz constant of the network class. We consider two layer fully connected ReLU networks with input dimension d, output dimension c, and the number of hidden units h. Output of a network is DISPLAYFORM0 h×d and V ∈ R c×h. We denote the incoming weights to the hidden unit i by u i and the outgoing weights from hidden unit i by v i. Therefore u i corresponds to row i of matrix U and v i corresponds to the column i of matrix V.We consider the c-class classification task where the label with maximum output score will be selected as the prediction. , we define the margin operator µ: R c × [c] → R as a function that given the scores f (x) ∈ R c for each label and the correct label y ∈ [c], it returns the difference between the score of the correct label and the maximum score among other labels, i.e. DISPLAYFORM1. We now define the ramp loss as follows: DISPLAYFORM2 For any distribution D and margin γ > 0, we define the expected margin loss of a predictor f as DISPLAYFORM3 The loss L γ defined this way is bounded between 0 and 1. We useL γ (f) to denote the empirical estimate of the above expected margin loss. As setting γ = 0 reduces the above to classification loss, we will use L 0 (f) andL 0 (f) to refer to the expected risk and the training error respectively. For any function class F, let γ • F denote the function class corresponding to the composition of the loss function and functions from class F. With probability 1 − δ over the choice of the training set of size m, the following generalization bound holds for any function f ∈ F (, Theorem 3.1): DISPLAYFORM0 where R S (H) is the Rademacher complexity of a class H of functions with respect to the training set S which is defined as: Rademacher complexity is a capacity measure that captures the ability of functions in a function class to fit random labels which increases with the complexity of the class. DISPLAYFORM1 We will bound the Rademacher complexity of neural networks to get a bound on the generalization error. Since the Rademacher complexity depends on the function class considered, we need to choose the right function class that only captures the real trained networks, which is potentially much smaller than networks with all possible weights, to get a complexity measure that explains the decrease in generalization error with increasing width. Choosing a bigger function class can in weaker capacity bounds that do not capture this phenomenon. 
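Since the objects just defined (the two layer ReLU network, the margin operator, and the ramp loss) drive everything that follows, here is a small NumPy sketch of how they can be computed. The array conventions and function names are ours; the ramp loss display is elided above, so the code uses the standard form (1 for non-positive margin, 0 for margin at least γ, linear in between).

```python
import numpy as np

def forward(U, V, X):
    """Two layer ReLU network f(x) = V [U x]_+ ; U: (h, d), V: (c, h), X: (m, d)."""
    return np.maximum(X @ U.T, 0.0) @ V.T            # (m, c) class scores

def margin(scores, y):
    """Margin operator: score of the correct label minus the best other score."""
    idx = np.arange(len(y))
    true = scores[idx, y]
    others = scores.copy()
    others[idx, y] = -np.inf
    return true - others.max(axis=1)

def ramp_loss(margins, gamma):
    """Ramp loss: 1 if margin <= 0, 0 if margin >= gamma, linear in between."""
    return np.clip(1.0 - margins / gamma, 0.0, 1.0)

# Empirical margin loss: ramp_loss(margin(forward(U, V, X), y), gamma).mean()
```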
Towards that we first investigate the behavior of different measures of network layers with increasing number of hidden units. The experiments discussed below are done on the CIFAR-10 dataset. Please see Section A for similar observations on SVHN and MNIST datasets. First layer: As we see in the second panel in FIG1 even though the spectral and Frobenius norms of the learned layer decrease initially, they eventually increase with h, with Frobenius norm increasing at a faster rate. However the distance Frobenius norm, measured w.r.t. initialization (U − U 0 F), decreases. This suggests that the increase in the Frobenius norm of the weights in larger networks is due to the increase in the Frobenius norm of the random initialization. To understand this behavior in more detail we also plot the distance to initialization per unit and the distribution of angles between learned weights and initial weights in the last two panels of FIG1. We indeed observe that per unit distance to initialization decreases with increasing h, and a significant shift in the distribution of angles to initial points, from being almost orthogonal in small networks to almost aligned in large networks. This per unit distance to initialization is a key quantity that appears in our capacity bounds and we refer to it as unit capacity in the remainder of the paper. Unit capacity. We define β i = u i − u 0 i 2 as the unit capacity of the hidden unit i. Second layer: Similar to first layer, we look at the behavior of different measures of the second layer of the trained networks with increasing h in the first panel of FIG1. Here, unlike the first layer, we notice that Frobenius norm and distance to initialization both decrease and are quite close suggesting a limited role of initialization for this layer. Moreover, as the size grows, since the Frobenius norm V F of the second layer slightly decreases, we can argue that the norm of outgoing weights v i from a hidden unit i decreases with a rate faster than 1/ √ h. If we think of each hidden unit as a linear separator and the top layer as an ensemble over classifiers, this means the impact of each classifier on the final decision is shrinking with a rate faster than 1/ √ h. This per unit measure again plays an important role and we define it as unit impact for the remainder of this paper. Unit impact. We define α i = v i 2 as the unit impact, which is the magnitude of the outgoing weights from the unit i. Motivated by our empirical observations we consider the following class of two layer neural networks that depend on the capacity and impact of the hidden units of a network. Let W be the following restricted set of parameters: DISPLAYFORM0 We now consider the hypothesis class of neural networks represented using parameters in the set W: DISPLAYFORM1 Our empirical observations indicate that networks we learn from real data have bounded unit capacity and unit impact and therefore studying the generalization behavior of the above function class can potentially provide us with a better understanding of these networks. Given the above function class, we will now study its generalization properties. In this section we prove a generalization bound for two layer ReLU networks. We first bound the Rademacher complexity of the class F W in terms of the sum over hidden units of the product of unit capacity and unit impact. Combining this with the equation FORMULA4 will give us the generalization bound. Theorem 1. 
Given a training set S = {x i} m i=1 and γ > 0, Rademacher complexity of the composition of loss function γ over the class F W defined in equations FORMULA6 and is bounded as follows: DISPLAYFORM0 The proof is given in the supplementary Section C. The main idea behind the proof is a new technique to decompose the complexity of the network into complexity of the hidden units. To our knowledge, all previous works decompose the complexity to that of layers and use Lipschitz property of the network to bound the generalization error. However, Lipschitzness of the layer is a rather weak property that ignores the linear structure of each individual layer. Instead, by decomposing the complexity across the hidden units, we get the above tighter bound on the Rademacher complexity of the two layer neural networks. The generalization bound in Theorem 1 is for any function in the function class defined by a specific choice of α and β fixed before the training procedure. To get a generalization bound that holds for all networks, it suffices to cover the space of possible values for α and β and take a union bound over it. The following theorem states the generalization bound for any two layer ReLU network 2.Theorem 2. For any h ≥ 2, γ > 0, δ ∈ and U 0 ∈ R h×d, with probability 1 − δ over the choice of the training set DISPLAYFORM1 and U ∈ R h×d, the generalization error is bounded as follows: DISPLAYFORM2 The above generalization bound empirically improves over the existing bounds, and decreases with increasing width for networks learned in practice (see Section 2.3). We also show an explicit lower bound for the Rademacher complexity (Theorem 3), matching the first term in the above generalization bound, thereby showing its tightness. The additive factorÕ(h/m) in the above bound is the of taking the union bound over the cover of α and β. As we see in FIG6, in the regimes of interest this additive term is small and does not dominate the first term, ing in an overall decrease in capacity with over-parametrization. In Appendix Section B, we further extend the generalization bound in Theorem 2 to p norms, presenting a finer tradeoff between the two terms. In TAB1 with respect to the size of the network trained on CIFAR-10. BID1 ): The first term in their bound U 2 V − V 0 1,2 is of smaller magnitude and behaves roughly similar to the first term in our bound U 0 2 V F (see FIG2 last two panels). The key complexity term in their bound is U − U 0 1,2 V 2, and in our bound is U − U 0 F V F, for the range of h considered. V 2 and V F differ by number of classes, a small constant, and hence behave similarly. However, U − U 0 1,2 can be as big as DISPLAYFORM0 DISPLAYFORM1 DISPLAYFORM2 when most hidden units have similar capacity. Infact their bound increases with h mainly because of this term U − U 0 1,2. As we see in the first and second panels of Experimental comparison. We train two layer ReLU networks of size h on CIFAR-10 and SVHN datasets with values of h ranging from 2 6 to 2 15. The training and test error for CIFAR-10 are shown in the first panel of FIG0, and for SVHN in the left panel of FIG4. We observe for both datasets that even though a network of size 128 is enough to get to zero training error, networks with sizes well beyond 128 can still get better generalization, even when trained without any regularization. We further measure the unit-wise properties introduce in the paper, namely unit capacity and unit impact. 
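As a concrete restatement of the two per-unit measures just introduced, here is a small NumPy sketch; the array layout follows the convention above (the u_i are the rows of U, the v_i are the columns of V), and the function names are ours.

```python
import numpy as np

def unit_capacity(U, U_init):
    """beta_i = ||u_i - u_i^0||_2: distance of each hidden unit's incoming
    weights (rows of U) from their values at initialization."""
    return np.linalg.norm(U - U_init, axis=1)

def unit_impact(V):
    """alpha_i = ||v_i||_2: norm of each hidden unit's outgoing weights
    (columns of V)."""
    return np.linalg.norm(V, axis=0)

# The per-network curves discussed next are, e.g., unit_capacity(U, U_init).mean()
# and unit_impact(V).mean(), tracked as the width h grows.
```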
These quantities decrease with increasing h, and are reported in the right panel of FIG0 and second panel of FIG4. Also notice that the number of epochs required for each network size to get 0.01 cross-entropy loss decreases for larger networks as shown in the third panel of FIG4.For the same experimental setup, FIG6 compares the behavior of different capacity bounds over networks of increasing sizes. Generalization bounds typically scale as C/m where C is the effective capacity of the function class. The left panel reports the effective capacity C based on different measures calculated with all the terms and constants. We can see that our bound is the only that decreases with h and is consistently lower that other norm-based data-independent bounds. Our bound even improves over VC-dimension for networks with size larger than 1024. While the actual numerical values are very loose, we believe they are useful tools to understand the relative generalization behavior with respect to different complexity measures, and in many cases applying a set of data-dependent techniques, one can improve the numerical values of these bounds significantly (; BID0 each capacity bound normalized by its maximum in the range of the study for networks trained on CIFAR-10 and SVHN respectively. For both datasets, our capacity bound is the only one that decreases with the size even for networks with about 100 million parameters. All other existing norm-based bounds initially decrease for smaller networks but then increase significantly for larger networks. Our capacity bound therefore could potentially point to the right properties that allow the over-parametrized networks to generalize. Finally we check the behavior of our complexity measure under a different setting where we compare this measure between networks trained on real and random labels . We plot the distribution of margin normalized by our measure, computed on networks trained with true and random labels in the last panel of FIG4 -and as expected they correlate well with the generalization behavior. In this section we will prove a lower bound for the Rademacher complexity of neural networks, that matches the dominant term in the upper bound of Theorem 1. We will show our lower bound on a smaller function class than F W, with an additional constraint on spectral norm of the hidden layer. This allows for comparison with the existing , and also extends the lower bound to the bigger class F W. Theorem 3. Define the parameter set DISPLAYFORM0 and let F W be the function class defined on W by equation. Then, for any DISPLAYFORM1 Clearly, W ⊆ W, since it has an extra constraint. The complete proof is given in the supplementary Section C.3.Published as a conference paper at ICLR 2019The above complexity lower bound matches the first term, DISPLAYFORM2, in the upper bound of Theorem 1, upto 1 γ, which comes from the 1 γ -Lipschitz constant of the ramp loss l γ. To match the second term in the upper bound for Theorem 1, consider the setting with c = 1 and β = 0, ing in, DISPLAYFORM3 where DISPLAYFORM4 In other words, when β = 0, the function class DISPLAYFORM5, and therefore we have the above lower bound, showing that the upper bound provided in Theorem 1 is tight. It also indicates that even if we have more information, such as bounded spectral norm with respect to the reference matrix is small (which effectively bounds the Lipschitz of the network), we still cannot improve our upper bound. 
To our knowledge, all the previous capacity lower bounds for spectral norm bounded classes of neural networks with a scalar output and element-wise activation functions correspond to the Lipschitz constant of the network. Our lower bound strictly improves over this, and shows a gap between the Lipschitz constant of the network (which can be achieved by even linear models), and the capacity of neural networks. This lower bound is non-trivial, in the sense that the smaller function class excludes the neural networks with all rank-1 matrices as weights, and thus shows a Θ(√ h)-capacity gap between the neural networks with ReLU activations and linear networks. The lower bound therefore does not hold for linear networks. Finally, one can extend the construction in this bound to more layers by setting all the weight matrices in the intermediate layers to be the Identity matrix. for the function class defined by the parameter set: DISPLAYFORM6 Note that s 1 s 2 is the Lipschitz bound of the function class F Wspec. Given W spec with bounds s 1 and s 2, choosing α and β such that α 2 = s 1 and max i∈[h] β i = s 2 in W ⊂ W spec. Hence we get the following from Theorem 3, showing a stronger lower bound for this function class as well. DISPLAYFORM7 Hence our improves the lower bound in by a factor of √ h. Theorem 7 in also gives a Ω(s 1 s 2 √ c) lower bound, c is the number of outputs of the network, for the composition of 1-Lipschitz loss function and neural networks with bounded spectral norm, or ∞-Schatten norm. Our above even improves on this lower bound. In this paper we present a new capacity bound for neural networks that empirically decreases with the increasing number of hidden units, and could potentially explain the better generalization performance of larger networks. In particular, we focused on understanding the role of width in the generalization behavior of two layer networks. More generally, understanding the role of depth and the interplay between depth and width in controlling capacity of networks, remain interesting directions for future study. We also provided a matching lower bound for the capacity improving on the current lower bounds for neural networks. While these bounds are useful for relative comparison between networks of different size, their absolute values still remain larger than the number of training samples, and it is of interest to get bounds with numerically smaller values. In this paper we do not address the question of whether optimization algorithms converge to low complexity networks in the function class considered in this paper, or in general how does different hyper parameter choices affect the complexity of the recovered solutions. It is interesting to understand the implicit regularization effects of the optimization algorithms (a; ;) for neural networks, which we leave for future work. Below we describe the setting for each reported experiment. In this experiment, we trained a pre-activation ResNet18 architecture on CIFAR-10 dataset. The architecture consists of a convolution layer followed by 8 residual blocks (each of which consist of two convolution) and a linear layer on the top. Let k be the number of channels in the first convolution layer. The number of output channels and strides in residual blocks is then [k, k, 2k, 2k, 4k, 4k, 8k, 8k] and respectively. Finally, we use the kernel sizes 3 in all convolutional layers. We train 11 architectures where for architecture i we set k = 2 2+i/2. 
In each experiment we train using SGD with mini-batch size 64, momentum 0.9 and initial learning rate 0.1 where we reduce the learning rate to 0.01 when the cross-entropy loss reaches 0.01 and stop when the loss reaches 0.001 or if the number of epochs reaches 1000. We use the reference line in the plots to differentiate the architectures that achieved 0.001 loss. We do not use weight decay or dropout but perform data augmentation by random horizontal flip of the image and random crop of size 28 × 28 followed by zero padding. We trained fully connected feedforward networks on CIFAR-10, SVHN and MNIST datasets. For each data set, we trained 13 architectures with sizes from 2 3 to 2 15 each time increasing the number of hidden units by factor 2. For each experiment, we trained the network using SGD with mini-batch size 64, momentum 0.9 and fixed step size 0.01 for MNIST and 0.001 for CIFAR-10 and SVHN. We did not use weight decay, dropout or batch normalization. For experiment, we stopped the training when the cross-entropy reached 0.01 or when the number of epochs reached 1000. We use the reference line in the plots to differentiate the architectures that achieved 0.01 loss. Evaluations For each generalization bound, we have calculated the exact bound including the logterms and constants. We set the margin to 5th percentile of the margin of data points. Since bounds in BID2 and Neyshabur et al. (2015c) are given for binary classification, we multiplied BID2 by factor c and Neyshabur et al. (2015c) by factor √ c to make sure that the bound increases linearly with the number of classes (assuming that all output units have the same norm). Furthermore, since the reference matrices can be used in the bounds given in and BID0, we used random initialization as the reference matrix. When plotting distributions, we estimate the distribution using standard Gaussian kernel density estimation. Figures 6 and 7 show the behavior of several measures on networks with different sizes trained on SVHN and MNIST datasets respectively. The left panel of FIG10 shows the over-parametrization phenomenon in MNSIT dataset and the middle and right panels compare our generalization bound to others. In this section we generalize the Theorem 2 to p norm. The main new ingredient in the proof is the Lemma 11, in which we construct a cover for the p ball with entry-wise dominance. Theorem 5. For any h, p ≥ 2, γ > 0, δ ∈ and U 0 ∈ R h×d, with probability 1 − δ over the choice of the training set DISPLAYFORM0 and U ∈ R h×d, the generalization error is bounded as follows: DISPLAYFORM1 where. p,2 is the p norm of the row 2 norms. For p of order ln h, h e −p ≈ constant improves on the √ h additive term in Theorem 2 and DISPLAYFORM2 which is a tight upper bound for V F and is of the same order if all rows of V have the same norm -hence giving a tighter bound that decreases with h for larger values. In particular for p = ln h we get the following bound. Corollary 6. Under the settings of Theorem 5, with probability 1 − δ over the choice of the training set S = {x i} m i=1, for any function f (x) = V[Ux] +, the generalization error is bounded as follows: DISPLAYFORM3 We start by stating a simple lemma which is a vector-contraction inequality for Rademacher complexities and relates the norm of a vector to the expected magnitude of its inner product with a vector of Rademacher random variables. We use the following technical from in our proof. Lemma 7 (Propostion 6 of Maurer FORMULA2). Let ξ i be the Rademacher random variables. 
For any vector v ∈ R d, the following holds: DISPLAYFORM4 The above DISPLAYFORM5 Proof. DISPLAYFORM6 (i) follows from the Jensen's inequality. We next show that the Rademacher complexity of the class of networks defined in and FORMULA6 can be decomposed to that of hidden units. Lemma 9 (Rademacher Decomposition). Given a training set S = {x i} m i=1 and γ > 0, Rademacher complexity of the class F W defined in equations and is bounded as follows: DISPLAYFORM7 Proof. We prove the inequality in the lemma statement using induction on t in the following inequality: DISPLAYFORM8 where for simplicity of the notation, we let φ DISPLAYFORM9 The above statement holds trivially for the base case of t = 1 by the definition of the Rademacher complexity. We now assume that it is true for any t ≤ t and prove it is true for t = t + 1. DISPLAYFORM10 The last inequality follows from the √ 2 γ Lipschitzness of the ramp loss. The ramp loss is 1/γ Lipschitz with respect to each dimension but since the loss at each point only depends on score of the correct labels and the maximum score among other labels, it is √ 2 γ -Lipschitz. By Lemma 7, the right hand side of the above inequality can be bounded as follows: DISPLAYFORM11 This completes the induction proof. Lemma 10 (Ledoux-Talagrand contraction,). Let f: R + → R + be convex and increasing. Let φ i: R → R satisfy φ i = 0 and be L-Lipschitz. Let ξ i be independent Rademacher random variables. For any T ⊆ R n, DISPLAYFORM12 The above lemma will be used in the following proof of Theorem 1.Proof of Theorem 1. By Lemma 9, we have: DISPLAYFORM13 Now we can apply Lemma 10 with n = m × c, f DISPLAYFORM14, and we get DISPLAYFORM15 The proof is completed by taking sum of above inequality over j from 1 to h. We start by the following covering lemma which allows us to prove the generalization bound in Theorem 5 without assuming the knowledge of the norms of the network parameters. The following lemma shows how to cover an p ball with a set that dominates the elements entry-wise, and bounds the size of a one such cover. Lemma 11 (p covering lemma). Given any, D, β > 0, p ≥ 2, consider the set S DISPLAYFORM0 Proof. We prove the lemma by construction. DISPLAYFORM1 By Lemma 11, picking = ((1 + µ) 1/p − 1), we can find a set of vectors, DISPLAYFORM2 Lemma 13. For any h, p ≥ 2, c, d, γ, µ > 0, δ ∈ and U 0 ∈ R h×d, with probability 1 − δ over the choice of the training set DISPLAYFORM3 c×h and U ∈ R h×d, the generalization error is bounded as follows: DISPLAYFORM4 where DISPLAYFORM5 and. p,2 is the p norm of the column 2 norms. Proof. This lemma can be proved by directly applying union bound on Lemma 12 with for every C 1 ∈. For V p,2 ≤ 1 h 1/2−1/p, we can use the bound where C 1 = 1, and the additional constant 1 in Eq. 12 will cover that. The same is true for the case of U p,2 ≤ i h 1/2−1/p X F. When any of h 1/2−1/p V p,2 and h 1/2−1/p X F U p,2 is larger than DISPLAYFORM6, the second term in Eq. 12 is larger than 1 thus holds trivially. For the rest of the case, there exists (C 1, C 2) such that h 1/2−1/p C 1 ≤ h 1/2−1/p V p,2 + 1 and h 1/2−1/p C 2 ≤ h 1/2−1/p X F X F U p,2 + 1. Finally, we have γ √ m 4 ≥ 1 otherwise the second term in Eq. 12 is larger than 1. Therefore, DISPLAYFORM7 We next use the general in Lemma 13 to give specific for the case p = 2.Lemma 14. 
For any h ≥ 2, c, d, γ > 0, δ ∈ and U 0 ∈ R h×d, with probability 1 − δ over the choice of the training set S = {x i} m i=1 ⊂ R d, for any function f (x) = V[Ux] + such that V ∈ R c×h and U ∈ R h×d, the generalization error is bounded as follows: DISPLAYFORM8 Proof. To prove the lemma, we directly upper bound the generalization bound given in Lemma 13 for p = 2 and µ = 3 √ 2 4 − 1. For this choice of µ and p, we have 4(µ + 1) 2/p ≤ 3 √ 2 and ln N p,h is bounded as follows:ln N p,h = ln h/µ + h − 2 h − 1 ≤ ln e h/µ + h − 2 h − 1 h−1 = (h − 1) ln e + e h/µ − 1 h − 1 ≤ (h − 1) ln e + e h/µ h − 1 ≤ h ln(e + 2e/µ) ≤ 5hProof of Theorem 2. The proof directly follows from Lemma 14 and usingÕ notation to hide the constants and logarithmic factors. Next lemma states a generalization bound for any p ≥ 2, which is looser than 14 for p = 2 due to extra constants and logarithmic factors. Lemma 15. For any h, p ≥ 2, c, d, γ > 0, δ ∈ and U 0 ∈ R h×d, with probability 1 − δ over the choice of the training set S = {x i} m i=1 ⊂ R d, for any function f (x) = V[Ux] + such that V ∈ R c×h and U ∈ R h×d, the generalization error is bounded as follows: DISPLAYFORM9. p,2 is the p norm of the column 2 norms. Proof. To prove the lemma, we directly upper bound the generalization bound given in Lemma 13 for µ = e p − 1. For this choice of µ and p, we have (µ + 1) 2/p = e 2. Furthermore, if µ ≥ h, N p,h = 0, otherwise ln N p,h is bounded as follows: DISPLAYFORM10 = (h/(e p − 1) − 1) ln e + e h − 1 h/(e p − 1) − 1 ≤ (e 1−p h − 1) ln (eh)Since the right hand side of the above inequality is greater than zero for µ ≥ h, it is true for every µ > 0.Proof of Theorem 5. The proof directly follows from Lemma 15 and usingÕ notation to hide the constants and logarithmic factors. Proof of Theorem 3. We will start with the case h = d = 2 k, m = n2 k for some k, n ∈ N.We will pick V = α = [α 1 . . . α 2 k] for every ξ, and DISPLAYFORM0, where x i:= e i n. That is, the whole dataset are divides into 2 k groups, while each group has n copies of a different element in standard orthonormal basis. We further define j (ξ) = For any ξ ∈ {−1, 1} n, let Diag(β) be the square diagonal matrix with its diagonal equal to β and F(ξ) be the following: F(ξ):= [f 1, f 2, . . ., f 2 k] such that if i (ξ) ≥ 0, f i = f i, and if i (ξ) < 0, f i = 0, and we will choose U(ξ) as Diag(β) × F(ξ).Since F is orthonormal, by the definition of F(ξ), we have F(ξ) 2 ≤ 1 and the 2-norm of each row of F is upper bounded by 1. Therefore, we have U(ξ) 2 ≤ Diag(β) 2 F(ξ) 2 ≤ max i β i, and
BygfghAcYX
We suggest a generalization bound that could partly explain the improvement in generalization with over-parametrization.
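As a small companion to the bound evaluation described in the entry above, the sketch below computes the two ingredients that recur in those measures: the margin set to the 5th percentile of per-example margins, and the p,2 norm (the p-norm of the per-row 2-norms of a weight matrix). It is an illustrative reconstruction, not the authors' evaluation code, and the function and variable names are assumptions.

```python
import numpy as np

def margin_percentile(scores, labels, q=5):
    """Margin of each example: score of the true class minus the largest other
    score; the bound is evaluated at the q-th percentile (q=5 in the text)."""
    correct = scores[np.arange(len(labels)), labels]
    masked = scores.copy()
    masked[np.arange(len(labels)), labels] = -np.inf
    runner_up = masked.max(axis=1)
    return np.percentile(correct - runner_up, q)

def lp2_norm(W, p=2.0):
    """||W||_{p,2}: the l_p norm of the per-row l_2 norms of W."""
    row_norms = np.linalg.norm(W, axis=1)
    return np.linalg.norm(row_norms, ord=p)

# toy usage with random scores and weights (illustrative only)
rng = np.random.default_rng(0)
scores = rng.normal(size=(100, 10))        # per-example class scores
labels = rng.integers(0, 10, size=100)     # ground-truth labels
gamma = margin_percentile(scores, labels)  # margin plugged into the bound
U = rng.normal(size=(64, 784))             # hidden-layer weight matrix
print(gamma, lp2_norm(U, p=2), lp2_norm(U, p=np.log(64)))
```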
We introduce three generic point cloud processing blocks that improve both accuracy and memory consumption of multiple state-of-the-art networks, thus allowing to design deeper and more accurate networks. The novel processing blocks that facilitate efficient information flow are a convolution-type operation block for point sets that blends neighborhood information in a memory-efficient manner; a multi-resolution point cloud processing block; and a crosslink block that efficiently shares information across low- and high-resolution processing branches. Combining these blocks, we design significantly wider and deeper architectures. We extensively evaluate the proposed architectures on multiple point segmentation benchmarks (ShapeNetPart, ScanNet, PartNet) and report systematic improvements in terms of both accuracy and memory consumption by using our generic modules in conjunction with multiple recent architectures (PointNet++, DGCNN, SpiderCNN, PointCNN). We report a 9.7% increase in IoU on the PartNet dataset, which is the most complex, while decreasing memory footprint by 57%. Figure 1: (Left) Memory footprint and inference speed of network variations: our multiresolution (mRes), and crosslink (X) blocks decrease the memory footprint, while our convolutiontype block (conv) decreases both memory consumption (-67%) and inference time (-41%) compared to the PointNet++ (PN++) baseline. (Right) Improvements in accuracy for three segmentation benchmarks of increasing complexity. On the -most complex-PartNet dataset our deep network outperforms the shallow PointNet++ baseline by 3.4%(spread of +0.6), yielding a 9.7% (spread of +3.4) relative increase. Geometry processing has recently started profiting from applying deep learning to graphics and 3D shape analysis (b; b; with networks that guarantee desirable properties of point cloud processing, such as permutation-invariance and quantization-free representation ; . Despite these advances, several differences still impede the breakthroughs made in computer vision. The different nature of 3D data dictates re-inventing for geometry processing the functionality of basic image processing blocks, such as multi-resolution processing or convolution operations. When operating with unstructured point clouds, one has to resort to elementary local pooling operations that group information within a neighborhood based on Euclidean distance. Exemplary methods such as the PointNet/PointNet++ architectures (a; make design choices that potentially compromise performance. In particular, the computation and memory demands of point network blocks can affect both training speed and, more crucially, inference time. One of the main bottlenecks for point networks is their memory-intensive nature: as detailed in Sec. 3.1, the PointNet++ architecture and its variants replicate point neighborhood information, letting every node carry in its feature vector information about all of its neighborhood. This in significant memory overhead, and limits the number of layers, features and feature compositions one can compute. In this work, we enhance point processing networks by introducing a set of modules that improve memory footprint and accuracy, without compromising on inference speed. We call the architectures Lean Point Networks, to highlight their lightweight memory budget. We build on the decreased memory budget to go deeper with point networks. 
As has been witnessed repeatedly in the image domain ;), we show that going deep also increases the prediction accuracy of point networks. We start in Sec. 3.2 by replacing the grouping operation used in point cloud processing networks with a low-memory alternative that is the point cloud processing counterpart of efficient image processing implementations of convolution. The ing'point convolution block' is 67% more memory-efficient and 41% faster than its PointNet++ counterpart, while exhibiting favorable training properties due to more effective mixing of information across neighborhoods. We then turn in Sec. 3.3 to improving the information flow across layers and scales within point networks through three techniques: a multi-resolution variant for multi-scale network which still delivers the multi-scale context but at a reduced memory and computational cost, residual links, and a new cross-link block that broadcasts multi-scale information across the network branches. By combining these advances we are able to successfully train deeper point networks that allow us to leverage upon larger, recently introduced datasets. In Sec. 4 we thoroughly validate our contributions on the ShapeNet-Part, ScanNet and PartNet segmentation benchmarks, reporting systematic improvements over the PointNet++ baseline. As shown in Fig. 1, when combined these contributions deliver multifold reductions in memory consumption while improving performance, allowing us in a second stage to train increasingly wide and deep networks. On PartNet, the most complex dataset, our deep architecture achieves a 9.7% relative increase in IoU while decreasing memory footprint by 57% and inference time by 47%. Having thoroughly ablated our design choices on the PartNet++ baseline, in Sec. 4.3 we turn to confirming the generic nature of our blocks. We extend the scope of our experiments to three additional networks, (i) DGCNN (b), (ii) SpiderCNN and (iii) PointCNN (b) and report systematic improvements in memory efficiency and performance. Learning in Point Clouds. Learning-based approaches have recently attracted significant attention in the context of Geometric Data Analysis, with several methods proposed specifically to handle point cloud data, including PointNet (a) and several extensions such as PointNet++ (b) and Dynamic Graph CNNs (b) for shape segmentation and classification, PCPNet for normal and curvature estimation, P2P-Net and PU-Net (b) for cross-domain point cloud transformation. Although many alternatives to PointNet have been proposed a;; ) to achieve higher performance, the simplicity and effectiveness of PointNet and its extension PointNet++ make it popular for many other tasks (a). Taking PointNet++ as our starting point, our work facilitates the transfer of network design techniques developed in computer vision to point cloud processing. In particular, significant accuracy improvements have been obtained with respect to the original AlexNet network by engineering the scale of the filtering operations , the structure of the computational blocks , and the network's width and depth ). A catalyst for experimenting with a larger space of network architecture, however, is the reduction of memory consumption -this motivated us to design lean alternatives to point processing networks. introduce a new operator to improve point cloud network efficiency, but only focus on increasing the convergence speed by tuning the receptive field. 
has investigated how residual/dense connections and dilated convolution could help mitigate vanishing gradient observed for deep graph convolution networks but without solving memory limitations. By contrast our work explicitly tackles the memory problem with the objective of training deeper/wider networks and shows that there are clear improvements over strong baselines. Memory-Efficient Networks. The memory complexity of the standard back-propagation implementation grows linearly with network's depth as backprop requires retaining in memory all of the intermediate activations computed during the forward pass, since they are required for the gradient computation in the backward pass. Several methods bypass this problem by trading off speed with memory. Checkpointing techniques use anchor points to free up intermediate computation , and re-compute them in the backward pass. This is 1.5x slower during training, since one performs effectively two forward passes rather than just one. More importantly, applying this technique is easy for chain-structured graphs, e.g., recursive networks but is not as easy for general Directed Acyclic Graphs, such as U-Nets, or multi-scale networks like PointNet++. One needs to manually identify the graph components, making it cumbersome to experiment with diverse variations of architectures. Reversible Residual Networks (RevNets) limit the computational block to come in a particular, invertible form of residual network. This is also 1.5x slower during training, but alleviates the need for anchor points altogether. Unfortunately, it is unclear what is the point cloud counterpart of invertible blocks. We propose generic blocks to reduce the memory footprint inspired from multi-resolution processing and efficient implementations of the convolution operation in computer vision. As we show in Sec. 4.3, our blocks can be used as drop-in replacements in generic point processing architectures (PointNet++, DGCNN, SpiderNet, PointCNN) without any additional network design effort. This Section introduces a set of modular blocks, shown in Fig. 2, that can be applied to most state-ofthe-art point networks. We start with a brief introduction of the PointNet++ network, which serves as an example point network baseline. We then introduce our modules and explain how their design decreases memory footprint and improves information flow. PointNet++ (b) builds on top of PointNet. First, each point p i looks up its k-nearest neighbors and stacks them to get a point set, say P i N k. Then, PointNet is applied to each such point set P i N k and the ant feature vector is assigned back to the corresponding point p i. While demonstrated to be extremely effective, PointNet++ has two main shortcomings: first, because of explictly carrying around k-nearest neighbor information for each point, the network layers are memory intensive; and second, being reliant on PointNet, it also delays transmission of global information until the last stage. As shown in Fig. 2(a), the existing PointNet++ grouping operation exposes the neighborhood of any point i by concatenating all of its K neighboring D-dimensional vectors v [i,k] to form a tensor: Every vector of this matrix is processed separately by a Multi-Layer-Perceptron that implements a function MLP: R D → R D, while at a later point a max-pooling operation over the K neighbors of every point delivers a slim, N × D matrix. 
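To make the memory cost of this lookup-SLP-pooling cascade concrete, here is a minimal PyTorch-style sketch of the baseline block just described; the k-nearest-neighbour indices are assumed to be precomputed, and the class and variable names are illustrative rather than taken from the released code.

```python
import torch
import torch.nn as nn

class NaiveGroupingSLP(nn.Module):
    """Baseline block: expose the K neighbours of every point, run a shared
    single-layer perceptron on each entry of the (N, K, D) tensor, then
    max-pool over K. The replicated (N, K, D) tensor stays alive for the
    backward pass, which is the memory bottleneck discussed in the text."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.slp = nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU())

    def forward(self, feats, knn_idx):
        # feats: (N, D) point features, knn_idx: (N, K) neighbour indices
        neigh = feats[knn_idx]            # (N, K, D) replicated neighbourhoods
        neigh = self.slp(neigh)           # shared SLP applied to every neighbour
        return neigh.max(dim=1).values    # (N, D_out) max-pool over K neighbours

# toy usage
N, K, D = 1024, 16, 64
feats = torch.randn(N, D)
knn_idx = torch.randint(0, N, (N, K))
out = NaiveGroupingSLP(D, 128)(feats, knn_idx)   # (1024, 128)
```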
When training a network every layer constructs and retains such a matrix in memory, so that it can be used in the backward pass to update the MLP parameters, and send gradients to earlier layers. The counterpart for a standard 2D image convolution amounts to forming a K 2 tensor in memory when performing K × K filtering and then implementing a convolution as matrix multiplication. This amounts to the im2col operation used for example in the caffe library to implement convo- The standard PN++ layer in (a) amounts to the composition of a neighborhood-based lookup and a PointNet element. In (b) we propose to combine parallel PointNet++ blocks in a multiresolution architecture, and in (c) allow information to flow across branches of different resolutions through a cross-link element. In (d) we propose to turn the lookup-SLP-pooling cascade into a low-memory counterpart by removing the kNN elements from memory once computed; we also introduce residual links, improving the gradient flow. In (e) we stack the block in (d) to grow in depth and build our deep architecture. Each of these tweaks to the original architecture allows for systematic gains in memory and computational efficiency. The green box indicates that the block can be grown in depth by stacking those green units. lutions with General Matrix-Matrix Multiplication (GEMM) . In point clouds the nearest neighbor information provides us with the counterpart to the K ×K neighborhood. Based on this observation we propose to use the same strategy as the one used in memory-efficient implementations of image convolutions for deep learning. In particular we free the memory as soon as the forward pass computes its output, rather than maintaining the matrix in memory. In the backward pass we reconstruct the matrix on the fly from the outputs of the previous layer. We perform the required gradient computations and then return the GPU memory resources (we refer to Algorithm 1 and Algorithm 2 in Appendix B for a detailed description). Using the on-the-fly re-computation of the tensor T has a positive impact on both the forward and backward pass. Instead of applying the SLP on the neighbourhood feature matrix, we can first apply the SLP on the flat feature matrix and then reconstruct the neighbourhood just before the max-pooling layer. The same can be used for the backward pass. In our unoptimized code, our convolution-type architecture shortens the time spent for the forward pass and the backward pass by 41% and 68% respectively on average. For a network with L layers, the memory consumption of the baseline PointNet++ layer grows as L×(N ×D×K), while in our case memory consumption grows as L×(N ×D)+(N ×D×K), where the term, L × (N × D) accounts for the memory required to store the layer activations, while the second term N ×D ×K is the per-layer memory consumption of a single neighborhood convolution layer. As L grows larger, this in a K-fold drop, shown on Fig. 3. This reduction opens up the possibility of learning much deeper networks, since memory demands now grow substantially more slowly in depth. With minor, dataset-dependent, fluctuations, the memory footprint of our convolution type architecture is on average 67% lower than the PointNet++ baseline, while doubling the number of layers comes with a memory overhead of 2.7%. Figure 3: Evolution of memory consumption as the number of layers increases for PointNet++ and convPN (convolution block counterpart) on ShapeNet-Part. 
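One way to realize this free-then-recompute behaviour is ordinary gradient checkpointing around the grouping step: the SLP is applied to the flat (N, D) features first, and the (N, K, D) neighbourhood tensor is only materialized inside a checkpointed function, so it is released after the forward pass and rebuilt during the backward pass. The sketch below illustrates the idea under these assumptions and is not the authors' implementation.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class LeanGroupingSLP(nn.Module):
    """Convolution-type block: SLP on the flat (N, D) features first, then the
    neighbourhood is exposed and max-pooled inside a checkpointed region so the
    (N, K, D_out) tensor is not retained between forward and backward passes."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.slp = nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU())

    @staticmethod
    def _group_and_pool(feats, knn_idx):
        neigh = feats[knn_idx]            # (N, K, D_out), rebuilt on the fly
        return neigh.max(dim=1).values    # (N, D_out)

    def forward(self, feats, knn_idx):
        feats = self.slp(feats)                                    # flat (N, D_out)
        pooled = checkpoint(self._group_and_pool, feats, knn_idx)  # frees (N, K, D_out)
        return pooled + feats   # residual-style skip (one possible placement)

# toy usage
N, K, D = 1024, 16, 64
out = LeanGroupingSLP(D, 64)(torch.randn(N, D), torch.randint(0, N, (N, K)))
```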
Doubling the number of layers for convPN only in an increase in memory by +2.3% and +16.8% for mid-and high-resolution respectively, which favorably compares to the +72% and +125% increases for PointNet++. We now turn to methods that allow for a more efficient propagation of information through the network. As has been repeatedly shown in computer vision, this can drastically impact the behavior of the network during training. Our experiments indicate that this is also the case for point processing. Due to lack of space, the illustrations clarifying the operation of these blocks are only available in the Appendix. a-Multi-Resolution vs Multi-Scale Processing. Shape features can benefit from both local, finegrained information and global, semantic-level context; their fusion can easily boost the discriminative power of the ing features. As presented in Sec. 3.1, PointNet++ mostly uses a neighborhood of fixed radius, leaving only to the later blocks of the network the use of bigger radii and point set sampling. Hence, at early stages of the network, the points only has access to very local information. We observe that this allows only very slow exchange of information among low-, mid-and coarsescale information. Coarse-scale information is conveyed not necessarily by all points that are contained within a larger radius, but by obtaining potentially sparse measurements from a larger area. This underlies also the common log-polar sampling mechanism in computer and biological vision (; ;) where a constant number of measurements is obtained in concentric disks of geometrically increasing radii. We therefore propose to extract neighborhoods of fixed size in downsampled versions of the original point cloud. In the coordinates of the original point cloud this amounts to increasing the effective grouping area, but it now comes with a much smaller memory budget. We observe a 58% decrease in memory footprint on average on the three tested datasets. Please refer to Fig. 4 in Appendix A.1.2 for an illustration of the difference between both types of processing. b-Residual Links. We use the standard Residual Network architecture, which helps to train deep networks reliably. Residual networks change the network's connectivity to improve gradient flow during training: identity connections provide early network layers with access to undistorted versions of the loss gradient, effectively mitigating the vanishing gradient problem. As our in Sec. 4 show, this allows us to train deeper networks. c-Cross Links. We further introduce Cross-Resolution Links in order to better propagate information in the network during training. We draw inspiration from the Multi-Grid Networks and the Multiresolution Tree Networks and allow layers that reside in different resolution branches to communicate with each other, thereby exchanging low-, mid-, and high-resolution information throughout the network processing, rather than fusing multi-resolution information at the end of each block. Cross-links broadcast information across resolutions as shown in Fig. 5 in Appendix A.1: unlike , an MLP transforms the output of one branch to the right output dimensionality so that it can be combined with the output of another branch. Each resolution branch can focus on its own representation and the MLPs will be in charge of making the translation between them. 
Taking in particular the case of a high-resolution branch communicating its outputs to a mid-resolution branch, we have N × D H feature vectors at the output of a lookup-SLP-pooling block cascade, which need to be communicated to the N/2 × D M vectors of the mid-resolution branch. We first downsample the points, going from N to N/2 points, and then use an MLP that transforms the vectors to the target dimensionality. Conversely, when going from low-to higher dimensions we first transform the points to the right dimensionality and then upsample them. We have experimented with both concatenating and summing multi-resolution features and have observed that summation behaves systematically better in terms of both training speed and test performance. Dataset and evaluation measures. We evaluate our modules on the point cloud segmentation task on three different datasets. The datasets consist of either 3D CAD models or real-world scans. We quantify the complexity of each dataset based on (i) the number of training samples, (ii) the homogeneity of the samples and (iii) the granularity of the segmentation task. Note that a network trained on a bigger and diverse dataset would be less prone to overfitting -as such we can draw more informative from more complex datasets. We order the datasets by increasing complexity: ShapeNet-Part , ScanNet and PartNet for fine-grained segmentation. By its size (24,506 samples) and its granularity (251 labeled parts), PartNet is the most complex dataset we have experimented on. In order to stay consistent with reported benchmarks on each dataset, we use two different metrics to report the Intersection over Union (IoU): (i) the mean Intersection over Union (mIoU) and (ii) the part Intersection over Union (pIoU). Please refer to Appendix C.2 for further explanation about both metrics. We report the performance of our variations for PointNet++ on the Shapenet-Part, ScanNet and PartNet datasets (Table 1). Our lean and deep architectures can be easily deployed on large and complex datasets. Hence, for PartNet, we choose to train on the full dataset all at once on a segmentation task across the 17 classes instead of having to train a separate network for each category in contrast to the original paper . Our architectures substantially improve the memory efficiency of the PointNet++ baseline while also delivering an increase in performance for more complex datasets (see Fig. 1). Indeed, as the data complexity grows, having efficient information flow has a larger influence on the network performance. On PartNet, the spread between our architectures and the vanilla PointNet++ becomes significantly high: our multiresolution (mRes) network increases relative performance by +5.7% over PointNet++ and this gain reaches +6.5% with cross-links (mResX). Our convolution-type network (convPN) outperforms other architectures when dataset complexity increases (+12.5% on ScanNet and +7.4% on PartNet) by more efficiently mixing information across neighbours. The aforementioned memory savings give the opportunity to design significantly deeper networks. Naively increasing network depth can harm performance; instead, we use residual connections to improve convergence for our deep network. The exact design of this architecture is more thoroughly detailed in Appendix A.1.4 but consists in doubling the number of layers in the encoding part. 
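A minimal sketch of the cross-link exchange between a high- and a mid-resolution branch follows, using the order of operations described above (downsample then translate when going to the coarser branch, translate then upsample when going to the finer one) and summation-based fusion. The index-based down/upsampling here is a simple placeholder for the FPS sampling and nearest-neighbour interpolation of the full model, and the names are illustrative.

```python
import torch
import torch.nn as nn

class CrossLink(nn.Module):
    """Exchange features between a high-resolution branch (N x D_h) and a
    mid-resolution branch (N/2 x D_m). High-to-mid: downsample, then MLP to
    D_m. Mid-to-high: MLP to D_h, then upsample. Fusion is by summation."""
    def __init__(self, d_high, d_mid):
        super().__init__()
        self.h2m = nn.Sequential(nn.Linear(d_high, d_mid), nn.ReLU())
        self.m2h = nn.Sequential(nn.Linear(d_mid, d_high), nn.ReLU())

    def forward(self, x_high, x_mid, down_idx, up_idx):
        # down_idx: (N/2,) indices of the sampled points (e.g. from FPS)
        # up_idx:   (N,)   index of the closest coarse point for each fine point
        to_mid = self.h2m(x_high[down_idx])   # downsample first, then translate
        to_high = self.m2h(x_mid)[up_idx]     # translate first, then upsample
        return x_high + to_high, x_mid + to_mid

# toy usage
N, Dh, Dm = 1024, 64, 128
xl = CrossLink(Dh, Dm)
down_idx = torch.randperm(N)[: N // 2]
up_idx = torch.randint(0, N // 2, (N,))
x_high, x_mid = xl(torch.randn(N, Dh), torch.randn(N // 2, Dm), down_idx, up_idx)
```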
While keeping the impact on efficiency very small (+6.3% on inference time on average and +3.6% on memory consumption at most compared to the shallow convPN), the performance is improved by a significant margin as shown in Table 1. On PartNet, this margin reaches +2.1% over the shallow convPN and +9.7% over the vanilla PointNet++. We underline the extremely low growth of memory as a function of depth, shown in Fig. 3. In Table 2 we compare against Deep GCN in terms of the overall accuracy and the mean IoU on S3DIS dataset . We attain similar performance to the current state-of-the-art, while relying on our generic memory-efficient network blocks and while based on a weaker baseline compared to Deep GCN (i.e. DGCNN); as we show in the following section, these blocks comes with the advantage of being applicable to many SOTA point processing networks. We have introduced building blocks for point processing networks based on two key ideas, (i) a memory efficient convolution and (ii) a multi-resolution approach. Our blocks make it really efficient to capture, process and diffuse information in a point neighbourhood. Diffusing information across neighborhood is the main behavior that most networks, if not all, share. We validate the generality of the proposed modular blocks in the context of other state-of-the-art point-based learning setups, as shown in Table 3. Each of our macro-blocks can be stacked together, extended into a deeper block by duplicating the green boxes (see Figure 2) or even be modified by changing one of its components by another. We test our framework on three additional networks among the latest state-of-the-art approaches, (i) Dynamic Graph CNN (b), (ii) PointCNN (b) and (iii) SpiderCNN . The efficiency of our modules for the KPConv network is explored in Appendix C.4. These networks involve a diverse set of point convolution approaches and thus allows us to assess the generic nature of our modular blocks. All three of the networks make extensive use of memory which is a bottleneck to depth. We implant our modules directly in the original networks, making, when needed, some approximations from the initial architecture. We report the performance of each network with our lean counterpart on two metrics: (i) memory footprint and (ii) IoU in Table 3. Our lean counterparts consistently improve both the IoU (from +1.0% up to +8.0%) and the memory consumption (from -19% up to -69%). Our modular blocks can thus be applied to a wide range of state-of-the-art networks and improve significantly their memory consumption while having a positive impact on performance. In this section, we report our extensive experiments to assess the importance of each block of our network architectures. Our lean structure allows us to adjust the network architectures by increasing its complexity, either by (i) adding extra connections or by (ii) increasing the depth. We analyze our networks along four axes: (i) the performance measured in IoU (Table 1), (ii) the memory footprint, (iii) the inference time and (iv) the length of the backward pass. Our main experimental findings regarding network efficiency are reported in Table 4 and ablate the impact of our proposed design choices for point processing networks. Table 4: Efficiency of our network architectures measured with a batch size of 8 samples on a Nvidia GTX 2080Ti GPU. All of our lean architectures allow to save a substantial amount of memory on GPU wrt. the PointNet++ baseline from 58% with mRes to a 67% decrease with convPN. 
This latter convolution-type architecture wins on all counts, decreasing both inference time (-41%) and the length of backward pass (-68%) by a large spread. Starting form this architecture, the marginal cost of going deep is extremely low: doubling the number of layers in the encoding part of the network increases inference time by 6.3% on average and the memory consumption by only 3.6% at most compared to convPN. Please refer to Appendix C.3 for absolute values. Multi-Resolution: Processing different resolutions at the same stage of a network has been shown to perform well in shallow networks. Indeed, mixing information at different resolutions helps to capture complex features early in the network. We adopt that approach to design our mRes architecture. Switching from a PointNet++ architecture to a multi-resolution setting increases the IoU by 1.2% on ShapeNet-Part and 5.7% on PartNet. More crucially, this increase in performance come with more efficiency. Although the inference time is longer (18% longer on average) due to the extra downsampling and upsampling operations, the architecture is much leaner and reduces by 58% the memory footprint. The training time is quicker due to a 62% faster backward pass. Cross-links: Information streams at different resolutions are processed separately and can be seen as complementary. To leverage this synergy, the network is provided with additional links connecting neighborhood resolutions. We experiment on the impact of those cross-resolution links to check their impact on the optimization. At the price of a small impact on memory efficiency (+8% wrt. mRes) and speed (+7% on inference time wrt. mRes), the performance can be improved on PartNet, the most complex dataset, with these extra-links by 0.8%. As described in Sec. 3.2, our leanest architeture is equivalent to constraining each PointNet unit to be composed of a single layer network, and turning its operation into a memory-efficient block by removing intermediate activations from memory. In order to get a network of similar size, multiple such units are stacked to reach the same number of layers as the original network. Our convolution-type network wins on all counts, both on performance and efficiency. Indeed, the IoU is increased by 12.5% on ScanNet and 7.4% on PartNet compared to PointNet++ baseline. Regarding its efficiency, the memory footprint is decreased by 67% on average while decreasing both inference time (-41%) and the length of the backward pass (-68%). These improvements in speed can be seen as the consequence of processing most computations on flatten tensors and thus reducing drastically the complexity compared to PointNet++ baseline. In this work we have introduced new generic building blocks for point processing networks, that exhibit favorable memory, computation, and optimization properties when compared to the current counterparts of state-of-the-art point processing networks. When based on PointNet++, our lean architecture convPN wins on all counts, memory efficiency (-67% wrt. PointNet++) and speed (-41% and -68% on inference time and length of backward pass). Its deep counterpart has a marginal cost in terms of efficiency and achieves the best IoU on PartNet (+9.7% over PointNet++). Those generic and modular blocks exhibit similar performance on all of the additional tested architectures with a significant decrease in memory (up to -69%) and increase in IoU (up to +8.0%). 
From the promising on PartNet and the extremely low cost of depth in our architectures, we anticipate that adding these components to the armament of the deep geometry processing community will allow researchers to train the next generation of point processing networks by leveraging upon the advent of larger shape datasets (; In this section, we provide more details about how we design our lean architectures to ensure reproducible for all tested architectures, (i) PointNet++ (b), (ii) Dynamic Graph CNN (b), (iii) SpiderCNN and (iv) PointCNN (b). We implement each networks in Pytorch following the original code in Tensorflow and implant our blocks. To keep things simple and concise in this section, we adopt the following notations: • S(n): Sampling layer of n points; • rNN(r): query-ball of radius r; • MaxP: Max Pooling along the neighbourhood axis; •: Multi-resolution combination; • Lin(s): Linear unit of s neurons; • Drop(p): Dropout layer with a probability p to zero a neuron Inside our architectures, every downsampling module is itself based on FPS to decrease the resolution of the input point cloud. To get back to the original resolution, upsampling layers proceed to linear interpolation in the spatial space using the K u closest neighbours. To generate multiple resolutions of the same input point cloud, a downsampling ratio of 2 is used for every additional resolution (see Fig. 5). In all our experiments, we choose to report the performance of the multi-scale PointNet++ (MSG PN++) as it is reported to beat its alternative versions in the original paper on all tasks. We implement our own implementation of PointNet++ in Pytorch and choose the same parameters as in the original code. For segmentation task, the architecture is designed as follow: We omit here skiplinks for sake of clarity: they connect encoding and decoding modules at the same scale level. The mRes architecture consists in changing the way the sampling is done in the network to get a multiresolution approach (Fig. 4). We provide the details only for the encoding part of the network as we keep the decoding part unchanged from PointNet++. Encoding1: Starting from this architecture, we add Xlinks connection between each layer of mLPs to get our mResX architecture. A Xlink connection connects two neighbouring resolutions to merge information at different granularity. On each link, we use a sampling module (either downsampling or upsampling) to match the input to the target resolution. We use two alternatives for feature combination: (a) concatenation, (b) summation. In the later case, we add an additional sLP on each Xlink to map the input feature dimension to the target. To keep this process as lean as possible, we position the SLP at the coarser resolution, i.e. before the upsampling module or after the downsampling module. To simplify the writing, we adopt the additional notations: T ) where we make a sampling of s i points on each resolution i. When only one resolution is available as input, the block S([., s 1, s 2, ..., s n−1] T ) will sequentially downsample the input point cloud by s 1, s 2,.. points to create the desired number of resolutions. • Convolution block C([r 1, r 2, ..., r n] T ) is composed itself of three operations for each resolution i: neighborhood lookup to select the r i NN for each points, an sLP layer of the same size as its input and a max-pooling. • Transition block T ([t 1, t 2, ..., t n] T ) whose main role is to change the channel dimension of the input to one of the convolution block. 
An sLP of ouput dimension t i will be apply to the resolution i. We also add Xlinks inside each of the C blocks. In this architecture, the features are combined by summation and the links follow the same design as for mResX. Our deep architecture builds on convPN to design a deeper architecture. For our experiments, we double the size of the encoding part by repeating each convolution block twice. For each encoding segment, we position the sampling block after the third convolution block, so that the first half of the convolution blocks are processing a higher resolution point cloud and the other half a coarsen version. Starting from the authors' exact implementation, we swap each edge-conv layer, implemented as an MLP, by a sequence of single resolution convPN blocks. The set of convPN blocks replicates the succession of SLP used in the original implementation (to build their MLPs). To allow the use of residual links, a transition block is placed before each edge-conv layer to match the input dimension of our convPN blocks to their output dimension. A SpiderConv block can be seen as a bilinear operator on the input features and on a non-linear transformation of the input points. This non-linear transformation consists of changing the space where the points live. In the original architecture, an SLP is first applied to the transformed points to compute the points' Taylor expansion. Then, each output vector is multiplied by its corresponding feature. Finally a convolution is applied on the product. Therefore, the neighbourhood features can be built on-the-fly within the block and deleted once the outputs are obtained. We thus modify the backward pass to reconstruct the needed tensors for gradient computation. For PointCNN, we modify the χ-conv operator to avoid having to store the neighbourhood features for the backward pass. To do so, we make several approximations from the original architecture. We replace the first MLP used to lift the points by a sequence of convPN blocks. Thus, instead of learning a feature representation per neighbour, we retain only a global feature vector per representative point. We change as well the first fully connected layer used to learn the χ-transformation matrix. This new layer now reconstructs the neighbourhood features on-the-fly from its inputs and deletes it from memory as soon as its output is computed. During the backward pass, the neighbourhood features tensor is easily rebuilt to get the required gradients. We implement the same trick for the convolution operator applied to the transformed features. We further augment this layer with the task of applying the χ-transformation to the neighbourhood features once grouped. Finally, we place transition blocks between each χ-conv operation to enable residual links. In all our experiments, we process the dataset to have the same number of points N for each sample. To reach a given number of points, input pointclouds are downsampled using the furthest point sampling (FPS) algorithm or randomly upsampled. We keep the exact same parameters as the original networks regarding most of parameters. To regularize the network, we interleave a dropout layer between the last fully connected layers and parameterize it to zero 70% of the input neurons. Finally, we add a weight decay of 5e-4 to the loss for all our experiments. All networks are trained using the Adam optimizer to minimize the cross-entropy loss. 
The running average coefficients for Adam are set to 0.9 and 0.999 for the gradient and its square, respectively. The 2D counterpart of convolution operates in three steps: (i) neighbourhood exposure, (ii) matrix multiplication and (iii) pooling through a sum operator. Our convPN block follows the same steps but it can be seen as non-standard convolution as each weight matrix is constrained to be identical for all neighbours. To expose the neighbourhood, an intermediate tensor needs to be built to store neighborhood features. This tensor can be then used to gather and refine local information for each patch. This process has been used for image processing as the so-called im2col operation to rearrange discrete image blocks in a two-dimensional tensor. Exposing the neighbourhood simplifies the convolution to a simple matrix multiplication and thus fastens the convolution operation but does have a critical memory footprint if not handled properly. Indeed, neighborhood features as any other activations will be saved into memory to allow gradients to flow downward the graph. We design our memory efficient block to build the neighborhood matrix on-the-fly (see Algorithm 1 and 2) without the need to store neighborhood features. Algorithm 1: Low-memory grouping -Forward pass We evaluate our network on the point cloud segmentation task on three different datasets, ordered by increasing complexity: • ShapeNet-Part : CAD models of 16 different object categories composed of 50 labeled parts. The dataset provides 13, 998 samples for training and 2, 874 samples for evaluation. Point segmentation performance is assessed using the mean point Intersection over Union (mIoU). • ScanNet: Scans of real 3D scenes (scanned and reconstructed indoor scenes) composed of 21 semantic parts. The dataset provides 1, 201 samples for training and 312 samples for evaluation. We follow the same protocol as in Qi et al. (2017a) and report both the accuracy and the part Intersection over Union (pIoU). • PartNet : Large collection of CAD models of 17 object categories composed of 251 labeled parts. The dataset provides 17, 119 samples for training, 2, 492 for validation and 4, 895 for evaluation. The dataset provides a benchmark for three different tasks: fine-grained semantic segmentation, hierarchical semantic segmentation and instance segmentation. We report on the first task to evaluate our networks on a more challenging segmentation task using the same part Intersection over Union (pIoU) as in ScanNet. To report our , we use two versions of the Intersection over Union metric: • mIoU: To get the per sample mean-IoU, the IoU is first computed for each part belonging to the given object category, whether or not the part is in the sample. Then, those values are averaged across the parts. If a part is neither predicted nor in the ground truth, the IoU of the part is set to 1 to avoid this indefinite form. The mIoU obtained for each sample is then averaged to get the final score as, with n samples the number of samples in the dataset, cat(s), n cat(s) parts and P cat(s) the object category where s belongs, the number of parts in this category and the sets of its parts respectively. IoU s (p i) is the IoU of part p i in sample s. • pIoU: The part-IoU is computed differently. 
The IoU per part is first computed over the whole dataset and then, the values obtained are averaged across the parts as, with n parts the number of parts in the dataset, I s (p i) and U s (p i) the intersection and union for samples s on part p i respectively. To take into account the randomness of point cloud sampling when performing coarsening, we use the average of'N' forward passes to decide on the final segmentation during evaluation. The following section provides more details on the evaluation experiments introduced in the paper. We present the per-class IoU on both ShapeNet-Part (Table 5) and PartNet (Table 6) datasets for each of the PointNet++ based architecture. Due to the high number of points per sample and the level of details of the segmentation, PartNet can be seen as a much more complex than ShapeNet-Part. As additional reference, we provide on Table 8 the performance of our lean blocks applied to three architectures when training one network per-object category on PartNet (on Chairs and Tables that represents 60% of the dataset). On PartNet, the spread between an architecture with an improved information flow and a vanilla one becomes significant. Our PointNet++ based networks perform consistently better than the original architecture on each of the PartNet classes. Increasing the depth of the network allows to achieve a higher accuracy on the most complex classes such as Chairs or Lamps composed of 38 and 40 different part categories respectively. As shown on Fig. 6, our deep architecture is able to better capture the boundaries between parts and thus to predict the right labels very close from part edges. When a sample is itself composed of many parts, having a deep architecture is a significant advantage. For reference, we provide as well the absolute values for the efficiency of each of those networks measured by three different metrics on Table 7: (i) memory footprint, (ii) inference time and (iii) length of backward pass. We provide on Table 9 some additional experiments with KPConv network , a kernel point convolution approach. We report the efficiency with and without our modules, evaluated both in terms of memory consumption and forward/backward speed. Our modules successfully help to reduce the memory by up to 52.5% while having no impact on the speed of the forward or backward pass. Table 7: Efficiency of our network architectures measured with a batch size of 8 samples on a Nvidia GTX 2080Ti GPU. All of our lean architectures allow to save a substantial amount of memory on GPU wrt. the PointNet++ baseline from 58% with mRes to a 67% decrease with convPN. This latter convolution-type architecture wins on all counts, decreasing both inference time (-41%) and the length of backward pass (-68%) by a large spread. Starting form this architecture, the marginal cost of going deep is extremely low: doubling the number of layers in the encoding part of the network increases inference time by 6.3% on average and the memory consumption by only 3.6% at most compared to convPN) Parameters ( Figure 6: Segmentation prediction on the test set for both PointNet++ and our deepConvPN network compared to the ground truth. While PointNet++ struggles to detect accurately the boundaries between different parts, our deep architecture performs a much finer segmentation in those frontier areas.
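For concreteness, the sketch below computes the two segmentation metrics defined in the evaluation section above from per-point part labels: the per-sample mIoU (parts absent from both prediction and ground truth count as IoU 1, samples are then averaged over the dataset) and the dataset-level pIoU (intersections and unions are pooled over the whole dataset before averaging across parts). It follows those definitions directly; the exact category bookkeeping of the official evaluation scripts may differ.

```python
import numpy as np

def sample_miou(pred, gt, parts_of_category):
    """mIoU for one sample: IoU of every part of the sample's object category,
    with parts absent from both prediction and ground truth counted as 1.
    The dataset score averages this value over all samples."""
    ious = []
    for p in parts_of_category:
        inter = np.sum((pred == p) & (gt == p))
        union = np.sum((pred == p) | (gt == p))
        ious.append(1.0 if union == 0 else inter / union)
    return float(np.mean(ious))

def dataset_piou(preds, gts, all_parts):
    """pIoU: per-part intersections and unions are accumulated over the whole
    dataset, turned into one IoU per part, then averaged across parts."""
    inter = {p: 0 for p in all_parts}
    union = {p: 0 for p in all_parts}
    for pred, gt in zip(preds, gts):
        for p in all_parts:
            inter[p] += np.sum((pred == p) & (gt == p))
            union[p] += np.sum((pred == p) | (gt == p))
    per_part = [inter[p] / union[p] for p in all_parts if union[p] > 0]
    return float(np.mean(per_part))
```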
rJgsgCVYwS
We introduce three generic point cloud processing blocks that improve both accuracy and memory consumption of multiple state-of-the-art networks, thus allowing to design deeper and more accurate networks.
End-to-end acoustic-to-word speech recognition models have recently gained popularity because they are easy to train, scale well to large amounts of training data, and do not require a lexicon. In addition, word models may also be easier to integrate with downstream tasks such as spoken language understanding, because inference (search) is much simplified compared to phoneme, character or any other sort of sub-word units. In this paper, we describe methods to construct contextual acoustic word embeddings directly from a supervised sequence-to-sequence acoustic-to-word speech recognition model using the learned attention distribution. On a suite of 16 standard sentence evaluation tasks, our embeddings show competitive performance against a word2vec model trained on the speech transcriptions. In addition, we evaluate these embeddings on a spoken language understanding task and observe that our embeddings match the performance of text-based embeddings in a pipeline of first performing speech recognition and then constructing word embeddings from transcriptions. The task of learning fixed-size representations for variable length data like words or sentences, either text or speech-based, is an interesting problem and a focus of much current research. In the natural language processing community, methods like word2vec BID0, GLoVE BID1, CoVe BID2 and ELMo BID3 have become increasingly popular, due to their utility in several natural language processing tasks. Similar research has progressed in the speech recognition community, where however the input is a sequence of short-term audio features, rather than words or characters. Therefore, the variability in speakers, acoustics or microphones for different occurrences of the same word or sentence adds to the challenge. Prior work towards the problem of learning word representations from variable length acoustic frames involved either providing word boundaries to align speech and text BID4, or chunking ("chopping" or "padding") input speech into fixed-length segments that usually span only one word BID5 BID6 BID7 BID8. Since these techniques learn acoustic word embeddings from audio fragment and word pairs obtained via a given segmentation of the audio data, they ignore the specific audio context associated with a particular word. So the ing word embeddings do not capture the contextual dependencies in speech. In contrast, our work constructs individual acoustic word embeddings grounded in utterance-level acoustics. In this paper, we present different methods of obtaining acoustic word embeddings from an attention-based sequence-to-sequence * Equal contribution model BID9 BID10 BID11 trained for direct Acoustic-to-Word (A2W) speech recognition BID12. Using this model, we jointly learn to automatically segment and classify input speech into individual words, hence getting rid of the problem of chunking or requiring pre-defined word boundaries. As our A2W model is trained at the utterance level, we show that we can not only learn acoustic word embeddings, but also learn them in the proper context of their containing sentence. We also evaluate our contextual acoustic word embeddings on a spoken language understanding task, demonstrating that they can be useful in non-transcription downstream tasks. Our main contributions in this paper are the following: 1. We demonstrate the usability of attention not only for aligning words to acoustic frames without any forced alignment but also for constructing Contextual Acoustic Word Embeddings (CAWE). 2. 
We demonstrate that our methods to construct word representations (CAWE) directly from a speech recognition model are highly competitive with the text-based word2vec embeddings BID0, as evaluated on 16 standard sentence evaluation benchmarks. 3. We demonstrate the utility of CAWE on a speech-based downstream task of Spoken Language Understanding showing that pretrained speech models could be used for transfer learning similar to VGG in vision BID13 or CoVe in natural language understanding BID2. A2W modeling has been largely pursued using Connectionist Temporal Classification (CTC) models BID14 and Sequence-to-Sequence (S2S) models BID9. Prior work shows the need for large amounts of training data for these models (thousands of hours of speech) with large word vocabularies of frequently occurring words BID15 BID16 BID17 BID18 BID19. Progress in the field showed the possibility of training these models with smaller amount of data (300 hours Switchboard corpus BID20) but restricting the vocabulary to words occurring atleast 5 or 10 times BID21 BID22. The solutions to generate out-of-vocabulary words have revolved around backing off to smaller units like characters or sub-words BID21 BID16 BID17 BID22 BID19. While this solves the problem of rare word generation, the models are no longer pure-word models. present one of the first S2S models for pure-word large vocabulary A2W recognition with the 300 hour Switchboard corpus with a vocabulary of about 30,000 words. BID24 BID12 build upon their work and improve the training of these models for the large vocabulary task. BID12 is one of our previous works where we show that the direct A2W model is also able to automatically learn word boundaries without any supervision and is the current best pure-word S2S model. We use the same model in this work and expand it towards learning acoustic embeddings. BID4 BID5 BID7 BID6 BID25 BID8 all explore ways to learn acoustic word embeddings. All above methods except BID6 use unsupervised learning based methods to obtain these embeddings where they do not use the transcripts or do not perform speech recognition. BID6 use a supervised Convolutional Neural Network based speech recognition model but with short speech frames as input that usually correspond to a single word. This is the common practice in most prior work that simplifies training but prevents the models to scale to learn contextual word embeddings grounded in utterance level acoustics. BID4 propose an unsupervised method to learn speech embeddings using a fixed context of words in the past and future. The drawbacks of their method are the fixed context and need for forced alignment between speech and words for training. Learning text-based word embeddings is also a rich area of research with well established techniques such as BID0 BID1. Research has further progressed into learning contextualized word embeddings BID2 BID3 that are useful in many text-based downstream tasks BID26. BID2 learns contextual word embeddings from a fully trained machine translation model and depict re-use of their encoder in other downstream tasks. Our work ties A2W speech recognition model with learning contextual word embeddings from speech. Our S2S model is similar in structure to the Listen, Attend and Spell model BID10 which consists of 3 components: the encoder network, a decoder network and an attention model. The encoder maps the input acoustic features vectors a = (a1, a2, ..., aT) where ai ∈ R d, into a sequence of higher-level features h = (h1, h2, ..., h T). 
The encoder is a pyramidal (sub-sampling) multi-layer bi-directional Long Short Term Memory (BLSTM) network. The decoder network is also an LSTM network that learns to model the output distribution over the next target conditioned on sequence of previous predictions i.e. P (y l |y * DISPLAYFORM0) is the ground-truth label sequence. In this work, y * i ∈ U is from a word vocabulary. This decoder generates targets y from h using an attention mechanism. We use the location-aware attention mechanism BID11 that enforces monotonicity in the alignments by applying a convolution across time to the attention of previous time step. This convolved attention feature is used for calculating the attention for the current time step which leads to a peaky distribution BID11 BID12. Our model follows the same experimental setup and model hyper-parameters as the word-based models described in our previous work BID12 with the difference of learning 300 dimensional acoustic feature vectors instead of 320 dimensional. We now describe our method to obtain the acoustic word embeddings from the end-to-end trained speech recognition system described in Section 3. The model is as shown in Figure 1 where the embeddings are constructed using the hidden representations obtained from the encoder and the attention weights from the decoder. Our method of constructing "contextual" acoustic word embeddings is similar to a method proposed for text embeddings, CoVe BID2. The main challenge that separates our method from CoVe BID2 in learning embeddings from a supervised task, is the problem of alignment between input speech and output words. We use the location-aware attention mechanism that has the property to assign higher probability to certain frames leading to a peaky attention distribution. We exploit this property of location-aware attention in an A2W model to automatically segment continuous speech into words as shown in our previous work BID12, and then use this segmentation to obtain word embeddings. In the next two subsections, we formalize this process of constructing contextual acoustic word embeddings. Intuitively, attention weights on the acoustic frames hidden representations reflect their importance in classifying a particular word. They thereby provide a correspondence between the frame and the word within a given acoustic context. We can thus construct word representations by weighing the hidden representations of these acoustic frames in terms of their importance to the word i.e. the attention weight. We show this in the Figure 1 wherein the hidden representations and their attention weights are colored according to their correspondence with a particular word. Given that aj represents the acoustic frame j, let encoder(aj) represent the higher-level features obtained for the frame aj (i.e. encoder(aj) = h = (h1, h2, ..., h T), as explained in Section 3). Then, for the i th word wi our model first obtains the mappings of wi to acoustic frames aK where K is the set such that ∀k ∈ K k = arg max j (attention(aj)) over all utterances U containing the word wi in the training set. Below we describe three different ways of using attention to obtain acoustic word embeddings for a word wi (here, n(K) represents the cardinality of the set K): DISPLAYFORM0 Therefore, unweighted Average (U-AVG, Equation 1) is just the unweighted combination of all the hidden representations of acoustic frames mapped to a particular word. 
Attention weighted Average (CAWE-W, Equation 2) is the weighted average of the hidden representations of all acoustic frames using the attention weights for a given word. Finally, maximum attention (CAWE-M, Equation 3) is the hidden representation of the acoustic frame with the highest attention score for a given word across all utterances in the training data. We call the attention-weighted average and the maximum attention based techniques as Contextual Acoustic Word Embeddings (CAWE) since they are contextual owing to the use of attention scores (over all acoustic frames for a given word). We use a commonly used speech recognition setup, the 300 hour Switchboard corpus (LDC97S62) BID20 which consists of 2,430 twosided telephonic conversations between 500 different speakers and contains 3 million words of text. Our second dataset is a 300 hour subset of the How2 BID27 dataset of instructional videos, which contains planned, but free speech, often outdoor and recorded with distant microphones, as opposed to the indoor, telephony, conversational speech of Switchboard. There are 13,662 videos with a total of 3.5 million words in this corpus. The A2W obtains a word error rate of 22.2% on Switchboard and 36.6% on CallHome set from the Switchboard Eval2000 test set and 24.3% on dev5 test set of How2. Datasets for Downstream Tasks: We evaluate our embeddings by using them as features for 16 benchmark sentence evaluation tasks that cover Semantic Textual Similarity (STS 2012-2016 and STS B), classification: Movie Review (MR), product review (CJ), sentiment analysis (SST, SST-FG), question type (TREC), Subjectivity/Objectivity (SUBJ), and opinion polarity (MPQA), entailment and semantic relatedness using the SICK dataset for SICK-E (entailment) and SICK-R (relatedness) and paraphrase detection (MRPC). The STS and SICK-R tasks measure Spearman's coefficient of correlation between embedding based similarity and human scores, hence the scores range from [−1, 1] where higher number denotes high correlation. All the remaining tasks are measured on test classification accuracies. We use the SentEval toolkit BID26 to evaluate. Training Details: In all downstream evaluations involving classification tasks, we have used a simple logistic regression for classification since a better representation should lead to better scores without using complicated models (hence abstracting away model complexities from our evaluations). This also means that we can use the concatenation of CAWE and CBOW as features to the logistic regression model without adding tunable embedding parameters. Discussion: From the in TAB1 we see that CAWE-M outperforms U-AVG by 34% and 13% and CAWE-W by 33.9% and 12% on Switchboard and How2 datasets respectively in terms of average performance on STS tasks and leads to better or slightly worse performance on the classification tasks. We observe that CAWE-W usually performs worse than CAWE-M which could be attributed to a noisy estimation of the word embeddings on the account of taking even the less confident attention scores while constructing the embedding. In contrast, CAWE-M is constructed using the most confident attention score obtained over all the occurrences of the acoustic frames corresponding to a particular word. We also observe that U-AVG performs worse than CAWE-W on STS and SICK-R tasks since it is constructed using an even noisier process in which all encoder hidden representations are weighted equally irrespective of their attention scores. 
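One plausible reading of the three constructions (Equations 1-3 above) in code form is sketched below: encoder states and the decoder attention row are assumed to have been collected for every occurrence of a word, with one most-attended frame kept per utterance, and the exact normalization in the paper's equations may differ from this reconstruction.

```python
import numpy as np

def word_embeddings(occurrences):
    """occurrences: list of (H, a) pairs for one word, where H is the (T, D)
    matrix of encoder states of an utterance containing the word and a is the
    (T,) attention distribution of the decoder step that emitted the word."""
    frames, weights = [], []
    for H, a in occurrences:
        k = int(np.argmax(a))          # frame the decoder attends to most
        frames.append(H[k])
        weights.append(a[k])
    frames = np.stack(frames)          # one frame representation per occurrence
    weights = np.asarray(weights)

    u_avg = frames.mean(axis=0)                                        # Eq. 1
    cawe_w = (weights[:, None] * frames).sum(axis=0) / weights.sum()   # Eq. 2
    cawe_m = frames[int(np.argmax(weights))]                           # Eq. 3
    return u_avg, cawe_w, cawe_m
```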
Datasets for Downstream Tasks: The datasets are the same as described in Section 5.1.Training Details: In all the following comparisons, we compare embeddings obtained only from the training set of the speech recognition model, while the text-based word embeddings are obtained by training Continuous Bag-of-Words (CBOW) word2vec model on all the transcripts (train, validation and test). This was done to ensure a fair comparison between our supervised technique and the unsupervised word2vec method. This naturally leads to a smaller vocabulary for CAWE. Further, one of the drawbacks of A2W speech recognition model is that it fails to capture entire vocabulary, recognizing only 3044 words out of 29874 (out of which 18800 words occur less than 5 times) and 4287 out of 14242 total vocabulary for Switchboard and How2 respectively. Despite this fact, the performance of CAWE is very competitive with word2vec CBOW which does not TAB2, we see that our embeddings perform as well as the text-embeddings. Evaluations using CAWE-M extracted from Switchboard based training show that the acoustic embeddings when concatenated with the text embeddings outperform the word2vec embeddings on 10 out of 16 tasks. This concatenated embedding shows that we add more information with CAWE-M that improves the CBOW embedding as well. The gains are more prominent in Switchboard as compared to the How2 dataset since How2 is planned instructional speech whereas Switchboard is spontaneous conversational speech (thereby making the How2 characteristics closer to text leading to a stronger CBOW model). Dataset: In addition to generic sentence-level evaluations, we also evaluate CAWE on the widely used ATIS dataset BID28 for Spoken Language Understanding (SLU). ATIS dataset is comprised of spoken language queries for airline reservations that have intent and named entities. Hence, it is similar in domain to Switchboard, making it a useful test bed for evaluating CAWE on a speech-based downstream evaluation task. Training Details: For this task, our model is similar to the simple Recurrent Neural Network (RNN) based model architecture as investigated in BID29. Our architecture is comprised of an embedding layer, a single layer RNN-variant (Simple RNN, Gated Recurrent Unit (GRU)) along with a dense layer and softmax. In each instance, we train our model for 10 epochs with RMSProp (learning rate 0.001). We train each model 3 times with different seed values and report average performance. Discussion: BID29 concluded that text-based word embeddings trained on large text corpora consistently lead to better performance on the ATIS dataset. We demonstrate that direct speech-based word embeddings could lead to matching performance when compared to text-based word embeddings in this speech-based downstream task, thus highlighting the utility of our speech based embeddings. Specifically, we compare the test scores obtained by initializing the model with CAWE-M, CAWE-W and CBOW embeddings and fine-tuning them based on the task. We present a method to learn contextual acoustic word embeddings from a sequence-to-sequence acoustic-to-word speech recognition model that learns to jointly segment and classify speech. We analyze the role of attention in constructing contextual acoustic word embeddings, and find our acoustic embeddings to be highly competitive with word2vec (CBOW) text embeddings. 
We discuss two variants of such contextual acoustic word embeddings which outperform the simple unweighted-average method by up to 34% on semantic textual similarity tasks. The embeddings also match the performance of text-based embeddings in spoken language understanding, demonstrating the utility of this model as a pre-trained model for other speech-based downstream tasks. We surmise that contextual audio embeddings will generalize to and improve downstream tasks in a way that is similar to their text counterparts, despite the additional complexity presented by noisy audio input. In the future, we will explore ways to scale our model to larger corpora and larger vocabularies, and compare with non-contextual acoustic word embedding methods. This work was supported by the Center for Machine Learning and Health (CMLH) at Carnegie Mellon University and by Facebook.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJlmNI0ojQ
Methods to learn contextual acoustic word embeddings from an end-to-end speech recognition model that perform competitively with text-based word embeddings.
Unsupervised monocular depth estimation has made great progress after deep learning is involved. Training with binocular stereo images is considered as a good option as the data can be easily obtained. However, the depth or disparity prediction show poor performance for the object boundaries. The main reason is related to the handling of occlusion areas during the training. In this paper, we propose a novel method to overcome this issue. Exploiting disparity maps property, we generate an occlusion mask to block the back-propagation of the occlusion areas during image warping. We also design new networks with flipped stereo images to induce the networks to learn occluded boundaries. It shows that our method achieves clearer boundaries and better evaluation on KITTI driving dataset and Virtual KITTI dataset. Monocular depth estimation becomes an active research topic as deep learning is applied in various computer vision tasks. It has many applications, from navigation through to scene understanding. A single traditional camera can be a cheaper alternative to the expensive LIDAR sensor for automotive cars if accurate estimation can be achieved. Meanwhile, single camera simplifies the design of depth estimation solution which can be adopted quite widely at a low cost. One straight-forward way to train deep depth estimation models is to use ground truth depth images as the supervision signals BID1. However, supervised deep learning method is eager for massive data with ground truth. Collecting large datasets with ground truth depth in varied real scenarios is challenge and expensive. Instead, training using stereo images without depth label is an alternative option. BID7 proposed a method to exploit the left-right consistency of stereo images to tackle the monocular depth estimation, which achieved quite promising . However, the depth predicted by their method has blurred boundaries. The issue is mainly due to the occlusions during the image warping. Though it can be alleviated in some extent with proper post processing, the fundamental problem is not well addressed. In this paper, we propose a new method to overcome the blurred boundaries when using stereo pairs to train the monocular depth model. An example is illustrated in FIG0. During the image warping, we generate an occlusion mask using the disparity map to block the inappropriate back-propagation gradients for occlusion areas. However, the mask only cannot guarantee clear boundaries as there is no constrain for the masked areas. Then we design new networks to fully exploit the information of stereo images. With flipped stereo pairs, the network is induced to learn clear boundaries for occlusion areas. Our method provides a solution to the fundamental learning difficulty of occluded areas introduced by image warping in depth estimation. Empirical evaluation on KITTI driving dataset BID6 ) and Virtual KITTI dataset BID4 ) demonstrates the effectiveness of our approach. Moreover, we find the depth label of KITTI 2015 is usually very sparse near the object boundaries, which is not very sensitive to evaluate the clearness of boundaries. Large amounts of multi-view based approaches have been proposed such as stereo matching BID22 ), difference view point BID3 ) or temporal sequence BID18 ). Here we briefly review work based on single view depth estimation which is usually harder since reasoning depth from monocular colored image is an ill-posed problem. 
The most intuition way is treating monocular depth estimation problem as a supervised problem by taking RGB images as inputs and Lidar depth points as ground truth. BID21 proposed a method known as Make3d which breaks the image into homogeneous patches. For each small homogeneous patch, Saxena et al used a Markov Random Field (MRF) to infer a set of plane parameters that capture both the 3D location and 3D orientation. However, this approach has a hard time capturing thin structures since the predictions are made locally. BID1 first exploited CNN in a coarse to fine manner. BID13 BID12 proposed a network jointly explore the capacity of deep CNN and continuous conditional random field (CRF). BID10 incorporated semantic segmentations in to single-view depth estimation task since they are closely tied to the property of perspective geometry. BID11 proposed a residual network with up-sampling module using fully convolutional architecture. Also, reverse Huber loss was introduced. However, large amount of high-quality labelled data is needed, which is hard to require in practice. To overcome the lack of high quality labelled data, several semi-supervised and fully unsupervised methods have been proposed. BID2 first proposed a view synthesis based method called DeepStereo which generates new view image from nearby image. BID24 FIG2 proposed a method which generates the right image through probability distribution over all the possible disparities for each pixel. BID5 first proposed a warp based method by aligning reconstructed image with the ground truth left image as described in 3.1. However, their loss is not fully differentiable. BID7 improve this methods by introducing a novel loss. BID19 extended the network into two separate channel with 6 or 12 losses which improves the . Based on Garg et al, BID9 proposed a semi-supervised methods that exploited both the sparse Lidar points as supervision and stereo pairs as unsupervion signals. The semisupervised method was further improved by BID15, they decoupled the monocular depth prediction problem into two procedure, a view synthesis procedure followed by stereo matching. Recently, several work using only monocular temporal sequence comes out which enables more training data such as video sequence on YouTube. BID26 proposed a network that predicts depth and camera pose separately. Using the predicted depth and camera pose, relative temporal image can be reconstructed by image warping with which final loss can be constructed. BID16 performed a novel 3D loss that enforces consistency of the estimated 3D point clouds and ego-motion across consecutive frames and combined it with 2D photometric loss. proposed a differentiable implementation of Direct Visual Odometry (DVO) and a novel depth normalization strategy. However, all the temporal sequence based training meet the same problem of object motion. This problem can be alleviated by including stereo pairs during training known as trinocular training BID25 BID17 BID8 ). However, all these warp based methods have the difficulty of learning occluded area which would infect the . Godard et al achieves state of art of unsupervised monocular depth estimation with only stereo images. Follow Godard et al, we proposed monocular depth estimation network with novel mask methods which can be trained end-to-end and without ground-truth label. Our method is superior to Godard et al in quality with clearer boundaries, especially on dense evaluation dataset such as virtual-KITTI BID4 ). 
In general, our goal is to learn a network that can predict a pixel-wise dense depth map from single colored image(DISPLAYFORM0 However, all supervised methods have a hard time with acquiring large dense labelled data in real scenarios. Thus, several unsupervised methods have been proposed to overcome the obstacle. Among these methods, training by image reconstruction using rectified stereo pairs became more and more popular currently due to its high accuracy and easy accessibility of training data. BID5, Left-right consistency network proposed by Godard et al, Our refined network without shared parameters and our refined network with shared parameters First proposed by BID5, the monodepth estimation network takes left image (I l) as input and outputs the disparity aligned with the right image (d r) which can be used to sample from the left image (I l) to reconstruct the right image (Ĩ r) during training. Thus, image reconstruction loss can be constructed between the reconstructed right image(Ĩ r) and original right image(I r). When testing, only one colored image are required, the predicted disparity can be converted to depth simply using depth = b * f /disparity, where b and f are given baseline and camera focal length respectively. It is worth to mention that disparities (d l and d r) are a scalar per pixel as the images are rectified. The network was further improved by BID7 by introducing left-right consistency loss and refined encoder-decoder network. Given the input left image, the network predicts both the left and right disparities simultaneously which enables constructing the left-right consistency. This consistency restriction leads to more accurate and less artifacts. Also, fully differentiable backward bilinear sampling was used to reconstruct the left image which makes the model easier to optimize. With better network architecture and better loss, Godard et al achieved the state of art of unsupervised monodepth estimation only with rectified stereo pairs, and even outperforming supervised methods. Their network architectures are shown in FIG1.However, there still are unsatisfactory artifacts at the occlusion boundaries showing blurred ramps on the left side of image and of the occluders. Even with the post-processing step which weighted sums the flipped disparity of the flipped input image and the disparity of the input image, the blurred ramps are still visible near the objects, especially for those near the cameras with higher disparities as illustrated in FIG0. Though common backward warping using bilinear sampler is fully differentiable which enables us to train the model end to end, some undesirable duplicates and artifacts are introduced during the warping process because of occlusions according to BID14. In FIG2, we use SYTHIA dataset BID20 ), a synthesized virtual driving dataset to illustrate. Though ground truth disparities are used for warping, there still exists obvious duplicates and even a huge black region on the left of the reconstructed image(Ĩ l) because of the occlusion. If those inevitable artifacts and duplicates are back propagated during the training process, unwanted high losses will be introduced forcing the network learn to blur in those regions, as the blurriness(disparity ramps) in the occluded regions will make the reconstructed image show stretched patterns (Fig 7) which are more similar to original ground truth image compared to duplicates and large black regions FIG2 ).. 
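Two operations sit at the heart of this training scheme: converting a predicted disparity map to depth at test time, and backward warping one view from the other with horizontal bilinear sampling on rectified pairs. A numpy sketch of both is given below; the sign convention (and hence which view acts as the source) depends on whether the left or the right image is being reconstructed, and an actual training pipeline would use a differentiable sampler rather than this standalone sketch.

```python
import numpy as np

def disparity_to_depth(disp, baseline, focal):
    """Test-time conversion: depth = b * f / disparity (disparity in pixels)."""
    return baseline * focal / np.maximum(disp, 1e-6)

def warp_horizontal(src, disp):
    """Backward warping with horizontal bilinear sampling.

    Reconstructs the target view by reading the source view `src` (H, W) or
    (H, W, C) at x - disp[y, x] for every target pixel (y, x).  Subtracting
    the disparity is an assumed sign convention; flip it for the other view.
    """
    H, W = src.shape[:2]
    xs = np.arange(W)[None, :] - disp               # sampling coordinates per row
    x0 = np.clip(np.floor(xs).astype(int), 0, W - 1)
    x1 = np.clip(x0 + 1, 0, W - 1)
    w = xs - np.floor(xs)                           # bilinear weight toward x1
    rows = np.arange(H)[:, None]
    if src.ndim == 3:                               # colour image: broadcast over channels
        w = w[..., None]
    return (1.0 - w) * src[rows, x0] + w * src[rows, x1]
```

For example, the right image is reconstructed by warping the left image with the right-aligned disparity, and the photometric loss is then taken against the original right image.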
We used a warping mask which is generated automatically from disparities to block the back-propagation of those artifacts. The final output is shown in (e), where the white regions are masked. In order to block the back propagation process of those warping induced artifacts, we designed an algorithm which can mask the occlusion region automatically. In backward bilinear sampling process, a disparity map was used to sample from the source image. The intuition here is that, if any pixel in the source image has never been sampled during the bilinear sampling process, this pixel should only be visible in the source image and should be masked when reconstructing this source image later on. Thus, the mask methods takes, say, left disparity as input while generating the mask of reconstructed right image and vice versa. The pseudo code is shown in Algorithm 1 and the mask schematic diagram is shown in Fig 4. However, the mask method alone cannot guarantee the clearness since no constrain is added in masked region. Specifically, though the masks block the back propagation of duplicates and artifacts induced by warping, they also block the further learning process of those regions. Once the disparities start to blur, hardly can we correct the network back to clearness. To solve this problem, we refined the network architecture and introduced a flip-over training scheme. Though the mask blocks the process of further learning of the blurred regions, we find that the blurred side are definite which can be exploited to reactive the learning process. For example, when the network is only trained on left images and takes corresponding right images as ground truth, the disparity ramps (blurred regions) will only appear on the left side of the occluders. So, if we randomly flipped the input images horizontally, the disparity ramps will still appear on the left side. When flipped the output disparities back for warping, the blurred regions will appear on the right side where no mask is added. Examples are shown in the last column of Fig 7. Thus, those blurred regions will make distortions when reconstructing images, and back propagate despite the 3 for x,y in grid(h,w) do DISPLAYFORM0 13 end 14 end 15 return mask l, mask r mask. Also, because the flip-over scheme is performed randomly, any blurriness on the definite side will receive punishment on average which restrict the prediction to be clear. It is worth to mention that flip-over scheme will not affect masking process and we still use masks to block the back propagation of duplicates. The flip-over schematic diagram is shown in Fig. 4.However, the predicted disparities of the right branch won't make any sense if we take the flipped left image as input and are totally mismatched with the ground truths as is shown in Fig 4. As a , we delete the predicting branch of the right disparity and add another encoder-decoder network which takes right images as input and predicts right disparities as is shown in FIG1 This doubled encoder-decoder network enables us to preform left-right consistency loss at the cost of doubling the training parameters. However, it won't slow down the test speed since only one branch of encoder-decoder network is used when testing. We also tried another network architecture with shared the encoder-decoder network which achieves comparable as non-shared network while halves the training time. More details can be found in 6.1 We use similar training loss as BID7. 
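The mask generation of Algorithm 1 can be sketched in numpy as follows: any pixel of the source view that is never read by the bilinear sampler is treated as visible only in that view and is excluded from the reconstruction loss of that view. The sign convention and the handling of the two interpolation neighbours are assumptions; the pseudo-code above is the reference.

```python
import numpy as np

def occlusion_mask(disp, sign=-1):
    """Mark source-view pixels that are never touched by the bilinear sampler.

    `disp` (H, W) is the disparity used for backward warping: for every target
    pixel (y, x) the sampler reads the source view at x + sign * disp[y, x]
    (the sign depends on which view is reconstructed).  The returned mask is
    1 where a source pixel was sampled at least once and 0 where it should be
    blocked from back-propagation.
    """
    H, W = disp.shape
    sampled = np.zeros((H, W), dtype=bool)
    xs = np.arange(W)[None, :] + sign * disp
    x0 = np.clip(np.floor(xs).astype(int), 0, W - 1)   # the two source pixels each
    x1 = np.clip(x0 + 1, 0, W - 1)                     # target pixel interpolates from
    rows = np.repeat(np.arange(H)[:, None], W, axis=1)
    sampled[rows, x0] = True
    sampled[rows, x1] = True
    return sampled.astype(np.float32)
```

The resulting mask multiplies the photometric terms of the training loss described next, so that gradients from occluded regions are blocked while the rest of the image is trained as usual.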
The losses are performed on four different scale and are finally summed as the total loss. C = 4 s=1 C s. For each scale, three different losses including appearance match loss(C ap), disparity smoothness loss(C ds) and LR consistency loss(C lr) are performs as follows. DISPLAYFORM0 Intuitively, appearance matching loss measures photometric error between the reconstructed image(Ĩ l ij)) and the ground truth image(I l ij), which is defined as the weighted sum of L1 and SSIM shown as follows, DISPLAYFORM1 Disparity smoothness loss aims to force the smoothness of predicted disparities through Figure 4: Schematic diagram. The orange squares refer to an object such as pedestrians or vehicles. The blue and black squares refer to blurred disparity region and masked region respectively. Left: Naive masking process. Warping mask(black) is overlap with blurred region(blue). As a , masks will block the learning of those region. Right: Flip-over training scheme. We flipped the left images(f (I l)) as input and flipped the output disparities(f (d(f (I l)))) back to reconstruct the image(Ĩ l). Different from the diagram shown in left, the blurred region will switch to the opposite side to avoid the mask. As a , losses will be introduced leading to clear boundaries. We can also observed that the right branch (flipped predicted right disparity of flipped left image(f (d(I r)))) is totally mismatch with the ground truth right image, so we delete this branch and add another encoder-decoder branch as shown in FIG1. DISPLAYFORM2 Finally, L1 penalty is added to constrict the left-right consistency, which further reduce the artifacts, DISPLAYFORM3 Our network is based on BID7 and is implemented in Tensorflow (Abadi et al.). The network contains encoder network based on VGG16 or Resnet50 and decoder network with 7 upconvolutional layers, 7 merging layers and 4 disparity prediction layers. With 7 skip connections, the network can better handles features at different scales. Different from Godard et al, we modified the channel number of disparity prediction layers to predict only one disparity instead of two. Also, we tuned the default hyper parameters α and α ds. The rest are the same with Godard et al with α ap = 1, α lr = 1. The learning rate λ remains 10 −4 for the first 30 epoch and halves every 10 epoch when trained for 50 epoch. We also tried batch norm but might lead to unstable . Data augmentation is performed on the fly similar as Godard et al including flipping and color shifting. We trained our model on rectified stereo image pairs in KITTI and Cityscapes dataset and evaluated mainly on KITTI split BID6 ), Eigen split BID1 ) and virtual-KITTI datasets BID4 ). Also, we find the problem of KITTI 2015 evaluation such as sparsity and man-made defects which infects our evaluation . When testing on dense datasets like virtual-KITTI, the superiority becomes more obvious. For comparison, we use the same split as BID7 high quality disparity image issued by official KITTI dataset. It is worth mentioned that though these disparity images are of better quality than projected velodyne laser point and with CAD models inserted in cars, most part of ground truth are extremely sparse, especially in terms of occluded regions. As the red boxes in the FIG3 shown, there is some blank space on the left side of image and occluded regions which is not involved in the evaluation. Thus, those blank space just covers the shortcomings the disparity ramps on the occluded regions. 
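A compact numpy sketch of one scale of this objective is given below, restricted to grayscale images and the left-image terms for brevity. The weights alpha = 0.85 and alpha_ds = 0.1 are illustrative values, since the text only fixes alpha_ap = alpha_lr = 1 and states that alpha and alpha_ds were tuned; the smoothness term is written in the usual edge-aware form, and the SSIM window is a crude 3x3 box filter. The right-image terms and the sum over the four scales are analogous.

```python
import numpy as np

def _box3(x):
    # 3x3 mean filter with edge padding; a crude stand-in for the usual SSIM window.
    p = np.pad(x, 1, mode="edge")
    return sum(p[i:i + x.shape[0], j:j + x.shape[1]] for i in range(3) for j in range(3)) / 9.0

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    mx, my = _box3(x), _box3(y)
    vx, vy = _box3(x * x) - mx ** 2, _box3(y * y) - my ** 2
    cxy = _box3(x * y) - mx * my
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def scale_loss(img_l, rec_l, disp_l, disp_r_warped, mask_l,
               alpha=0.85, a_ap=1.0, a_ds=0.1, a_lr=1.0):
    """One scale of the objective (left-image terms only).

    `rec_l` is the left image reconstructed by warping, `disp_r_warped` is the
    right disparity warped into the left view (e.g. with warp_horizontal
    above), and `mask_l` is the occlusion mask (1 = keep, 0 = blocked).
    """
    m = mask_l
    # Appearance matching: weighted sum of SSIM and L1 photometric errors.
    c_ap = np.sum(m * (alpha * (1 - ssim(rec_l, img_l)) / 2
                       + (1 - alpha) * np.abs(rec_l - img_l))) / (np.sum(m) + 1e-8)
    # Disparity smoothness, written in the usual edge-aware form.
    dx_d, dy_d = np.abs(np.diff(disp_l, axis=1)), np.abs(np.diff(disp_l, axis=0))
    dx_i, dy_i = np.abs(np.diff(img_l, axis=1)), np.abs(np.diff(img_l, axis=0))
    c_ds = np.mean(dx_d * np.exp(-dx_i)) + np.mean(dy_d * np.exp(-dy_i))
    # Left-right consistency: L1 penalty between the two disparity maps.
    c_lr = np.mean(np.abs(disp_l - disp_r_warped))
    return a_ap * c_ap + a_ds * c_ds + a_lr * c_lr

# Total loss: the sum of scale_loss over the four output scales, plus the
# symmetric right-image terms.
```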
As a , our shows less superiority over Godard et al on KITTI stereo 2015 dataset. As shown in TAB2, we use the metrics from BID1 and D1-all metrics from KITTI. Our model achieves comparable with Godard et al when both trained on KITTI dataset only for 50 epoch, and superior when both trained longer for 100 epoch. BID6 ). K means the model is only trained on KITTI dataset, while CS + K means the model is trained on Cityscapes dataset and then finetune on KITTI dataset. Also, pp means post processing step, and more details about post processing can be found in 6.3. For a fair comparison, we train the network proposed by Godard et al and our non-shared network both to 50 and 100 epoch, and find that our network has larger improvement and better performance than BID7 when trained longer to 100 epoch. We guess that our network requires longer time to converge. We choose different hyperparameters from Godard et al for our network. Though it is unfair to evaluation on sparse KITTI dataset, our still outperforms that of Godard et al. Similar to Godard et al, we use the test split of 697 images proposed by BID1. Each image contains 3D points captured by velodyne laser which was used to generate ground truth depth. We keep the same 22600 stereo pairs for training. According to Godard et al, all evaluation are done under the Garg crop BID5 ) except for Eigen for a fair comparison. We also present the uncropped which our models' superiority become more obvious, because it will crop the disparity ramps on the left side for evaluation which boosting Godard et al's . Also, ground truths are captured by velodyne Lidar thus rather sparse which would reduce our superiority over Godard et al. The evaluation is in TAB4 4.3 VIRTUAL-KITTITo prevent sparsity induced inaccurate , we evaluate models on the Virtual-KITTI dataset BID6 ). Virtual-KITTI dataset contains 50 monocular videos generated from five different virtual worlds in urban settings under different weather conditions with corresponding pixel-level ground truth depth. With the same resolution, scenes and camera parameters as KITTI 2015, Virtual-KITTI dataset can be implement naively when testing. Since KITTI 2015 dataset we used for training does not cover those weathers, only 2126 labbeled images without weather condi-Method Supervised Dataset Abs Rel Sq Rel RMSE RMSE log δ < 1.25 δ < 1.25 2 δ < 1.25 3 BID1 BID1. K is KITTI and CS is cityscapes dataset for training. We use the evaluation provided in BID7. For a fair comparison, we use the crop the same as BID5 except for BID1 and apply the same hyper-parameters as Godard et al on our model. Besides, we set the maximun evaluation depth to 50 meters(cap 50m) in the second row which is the same as BID5, while others remain 80 meters. Also, we compared uncropped with Godard et al, on which the superiority become more obvious.tion are used for evaluation. The evaluation is shown in TAB5: Evaluation on virtual-KITTI. Once our ground truth become dense, our model out performance other models for its sharp and clear boundaries on predicted depth map. Even we crop the black edges of Godard et al (the left 10% of the whole disparity map) in the second row, our model is still superior to BID7 (the state of art unsupervised monocular depth estimation using left and right information only). 
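For reference, the depth metrics reported in the tables above (Abs Rel, Sq Rel, RMSE, RMSE log and the delta < 1.25^k accuracies, i.e. the standard metrics of BID1) can be computed as follows; masking pixels without ground truth and capping the depth (80 m by default, 50 m for the capped comparison) follow common practice.

```python
import numpy as np

def depth_metrics(gt, pred, max_depth=80.0, min_depth=1e-3):
    """Standard monocular-depth metrics; `gt` and `pred` are depth maps in
    metres, and pixels without ground truth (gt == 0) are ignored."""
    valid = gt > min_depth
    g = np.clip(gt[valid], min_depth, max_depth)
    p = np.clip(pred[valid], min_depth, max_depth)
    thresh = np.maximum(g / p, p / g)
    return {
        "abs_rel": np.mean(np.abs(g - p) / g),
        "sq_rel": np.mean((g - p) ** 2 / g),
        "rmse": np.sqrt(np.mean((g - p) ** 2)),
        "rmse_log": np.sqrt(np.mean((np.log(g) - np.log(p)) ** 2)),
        "delta1": np.mean(thresh < 1.25),
        "delta2": np.mean(thresh < 1.25 ** 2),
        "delta3": np.mean(thresh < 1.25 ** 3),
    }
```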
We evaluate on 3 different methods: naive, crop the black edge, both crop the edge and set the maximun evaluation depth to 50 meters In this work, we present an occlusion mask and filp-over training scheme to enable effective learning of object boundaries when using image warping. With our new network, our model achieves state of art using only stereo images. Moreover, as warping based image reconstruction is commonly used in depth estimation problem, our method provides a solution to the fundamental difficulty of occluded areas introduced by image warping. In the future, our method can be incorporated with more accurate network trained on trinocular data (temporal Stereo sequence) such as BID25, BID17 and BID8, which would further boost the accuracy.6 SUPPLEMENTARY MATERIALS The shared weight network is similar to the non-shared weight network except for the shared weight. The architecture is shown in 2. Fortunately, the shared weight model naturally handled the problem. Because the blurriness always occur in the occluded region, say, disparity ramps (burriness) in left disparity maps will only appear on the left side of the occluders, the network cannot distinguish which side the input image belongs to and which side to blur when trained on the left and right image simultaneously with shared weights. Thus, any wrong blurriness (blurriness on the wrong side, say, right blurriness in the left disparity maps) will lead to punishment in raising reconstructed loss since no mask was adding on the other side (wrong side, say, right side). As a , the network achieves comparable as non-shared network while halves the training time. However, shared method is slightly inferior to non-shared network but still out performance Godard FIG0, Godard et al used non-linear weight. Because our model do not predict black edge, so we simply average them as our post-processing. Here we use the network by BID7 to illustrate how flip-over scheme could solve the problem of back-propagate, so the predicted disparities are rather blurred. In Fig 7, from left to right: left disparity(d l), reconstructed left image(Ĩ l), corresponding mask(mask l), ground truth(I l), flipped disparity of flipped input imagef (d(f (I l))). To minimize the reconstruction error, the network predicts blurred forcing the network to sample from the , which leads to more similar stretched patterns instead of high loss duplicates as shown in FIG2. However, the blurred regions are aligned with the mask, which freezes the fine-tuning process of blurred regions. To solve the problem, we randomly flip the colored image as input(Input = f (I l)), then flipped the output back as final disparities for warping(f (d(f (I l)))). As shown in the last column, the blurred regions are no longer aligned with the masks which enables further learning process. As the training process goes on, the boundaries will become clearer as shown in FIG0. We also use schematic diagram to illustrate in Fig. 4 Figure 7: Example of blurred and flip-over scheme.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1fs4oRqKm
This paper proposes a mask method that resolves the blurred results of prior unsupervised monocular depth estimation caused by occlusion.
Graph classification is currently dominated by graph kernels, which, while powerful, suffer some significant limitations. Convolutional Neural Networks (CNNs) offer a very appealing alternative. However, processing graphs with CNNs is not trivial. To address this challenge, many sophisticated extensions of CNNs have recently been proposed. In this paper, we reverse the problem: rather than proposing yet another graph CNN model, we introduce a novel way to represent graphs as multi-channel image-like structures that allows them to be handled by vanilla 2D CNNs. Despite its simplicity, our method proves very competitive to state-of-the-art graph kernels and graph CNNs, and outperforms them by a wide margin on some datasets. It is also preferable to graph kernels in terms of time complexity. Code and data are publicly available. Graphs, or networks, are rich, flexible, and universal structures that can accurately represent the interaction among the components of many natural and human-made complex systems BID11. For instance, graphs have been used to describe and analyze the interplay among proteins within cells and the internal structure of proteins themselves BID3, the organization of the brain BID6, the World Wide Web BID26, textual documents BID22, and information propagation through a population BID18. Consequently, graph mining has attracted significant attention in machine learning and artificial intelligence, and is still today a very active area of investigation. A central graph mining task is that of graph classification (not to be mistaken with node classification). Its applications range from determining whether a protein is an enzyme or not in bioinformatics to categorizing documents in NLP and analyzing a social network. Graph classification is the task of interest in this study. The state-of-the-art in graph classification is currently dominated by a family of methods referred to as graph kernels. Graph kernels compute the similarity between two graphs as the sum of the pairwise similarities between some of their substructures, and then pass the similarity matrix computed on the entire dataset to a kernel-based supervised algorithm such as the Support Vector Machine BID8 to learn soft classification rules. Graph kernels mainly vary based on the substructures they use, which include random walks BID12, shortest paths, and subgraphs BID29, to cite only a few. While graph kernels have been very successful, they suffer significant limitations:1. High time complexity. This problem is threefold: first, populating the kernel matrix requires computing the similarity between every two graphs in the training set (say of size N), which amounts to N (N −1) /2 operations. The cost of training therefore increases much more rapidly than the size of the dataset. Second, computing the similarity between a pair of graphs (i.e., performing a single operation) is itself polynomial in the number of nodes. For instance, the time complexity of the shortest path graph kernel is O(|V 1 | 2 |V 2 | 2) for two graphs (V 1, V 2), where |V i | is the number of nodes in graph V i. Processing large graphs can thus become prohibitive, which is a serious limitation as big networks abound in practice. Finally, finding the support vectors is O(N 2) when the C parameter of the SVM is small and O(N 3) when it gets large BID4, which can again pose a problem on big datasets.2. Disjoint feature and rule learning. 
With graph kernels, the computation of the similarity matrix and the learning of the classification rules are two independent steps. In other words, the features are fixed and not optimized for the task.3. Graph comparison is based on small independent substructures. As a , graph kernels focus on local properties of graphs, ignoring their global structure . They also underestimate the similarity between graphs and suffer unnecessarily high complexity (due to the explosion of the feature space), as substructures are considered to be orthogonal dimensions BID35. We propose a very simple approach to turn a graph into a multi-channel image-like structure suitable to be processed by a traditional 2D CNN. It can be broken down into 3 steps, summarized in FIG0. The first step involves embedding the nodes of the graph. The embedding space is compressed with PCA at step 2. We then repeatedly extract 2D slices from the compressed space and compute a 2D histogram for each slice. The "image" representation of the graph is finally given by the stack of its 2D histograms (each histogram making for a channel). Note that the dimensionality of the final representation of a graph does not depend on its number of nodes or edges. Big and small graphs are represented by images of the same size. Our method addresses the limitations of graph kernels in the following ways:1. High time complexity. By converting all graphs in a given dataset to representations of the same dimensionality, and by using a classical 2D CNN architecture for processing those graph representations, our method offers constant time complexity at the instance level, and linear time complexity at the dataset level. Moreover, state-of-the-art node embeddings can be obtained for a given graph in linear time (w.r.t. the size of the graph), for instance with node2vec BID14. 2. Disjoint feature and rule learning. Thanks to the 2D CNN classifier, features are learned directly from the raw data during training to optimize performance on the downstream task. 3. Graph comparison is based on small independent substructures. Our approach capitalizes on state-of-the-art graph node embedding techniques that capture both local and global properties of graphs. In addition, we remove the need for handcrafted features. Convolutional Neural Networks (CNNs) are feedforward neural networks specifically designed to work on regular grids. A regular grid is the d-dimensional Euclidean space discretized by parallelo-topes (rectangles for d = 2, cuboids for d = 3, etc.). In CNNs, each neuron in a given layer receives input from a neighborhood of the neurons in the previous layer BID20. Those neighborhoods, or local receptive fields, allow CNNs to compose higher-level features from lower-level features, and thus to capture patterns of increasing complexity in a hierarchical way. Regular grids satisfy the spatial dependence 2 property, which is the fundamental premise on which local receptive fields and hierarchical composition of features in CNNs hold. Traditionally, a graph G(V, E) is encoded as its adjacency matrix A or Laplacian matrix L. A is a square matrix of dimensionality |V | × |V |, symmetric in the case of undirected graphs, whose (i, j) th entry A i,j is equal to the weight of the edge e i,j between nodes v i and v j, if such an edge exists, or to 0 otherwise. On the other hand, the Laplacian matrix L is equal to D − A, where D is the diagonal degree matrix. One could initially consider passing one of those structures as input to a 2D CNN. 
However, unlike in images, where close pixels are more strongly correlated than distant pixels, adjacency and Laplacian matrices are not associated with spatial dimensions and the notion of Euclidean distance, and thus do not satisfy the spatial dependence property. As will be detailed next, we capitalize on graph node embeddings to address this issue. Step 1: Graph node embeddings. There is local correlation in the node embedding space. In that space, the Euclidean distance between two points is meaningful: it is inversely proportional to the similarity of the two nodes they represent. For instance, two neighboring points in the embedding space might be associated with two nodes very distant in the graph, but playing the same structural role (e.g., of flow control), belonging to the same community, or sharing some other common property. Step 2: Alignment and compression with PCA. As state-of-the-art node embedding techniques (such as node2vec) are neural, they are stochastic. Dimensions are thus recycled from run to run, meaning that a given dimension will not be associated with the same latent concepts across graphs, or across several runs on the same graph. Therefore, to ensure that the embeddings of all the graphs in the collection are comparable, we apply PCA and retain the first d D principal components (where D is the dimensionality of the original node embedding space). PCA also serves an information maximization (compression) purpose. Compression is desirable as it greatly reduces the shape of the tensors fed to the CNN (for reasons that will become clear in what follows), and thus complexity, at the expense of a negligible loss in information. Step 3: Computing and stacking 2D histograms. We finally repeatedly extract 2D slices from the d-dimensional PCA node embedding space, and turn those planes into regular grids by discretizing them into a finite, fixed number of equally-sized bins, where the value associated with each bin is the count of the number of nodes falling into that bin. In other words, we represent a graph as a stack of d/2 2D histograms of its (compressed) node embeddings 3. As illustrated in FIG1, the first histogram is computed from the coordinates of the nodes in the plane made of the first two principal directions, the second histogram from directions 3 and 4, and so forth. Note that using adjacent and following PCA dimensions is an arbitrary choice. It ensures at least that channels are sorted according to the amount of information they contain. Using computer vision vocabulary, bins can be viewed as pixels, and the 2D slices of the embedding space as channels. However, in our case, instead of having 3 channels (R,G,B) like with color images, we have d/2 of them. That is, each pixel (each bin) is associated with a vector of size d/2, whose entries are the counts of the nodes falling into that bin in the corresponding 2D slice of the embedding space. Finally, the resolution of the image is determined by the number of bins of the histograms, which is constant for a given dataset across all dimensions and channels.4 EXPERIMENTAL SETUP We implemented a variant of LeNet-5 with which we reached 99.45% accuracy on the MNIST handwritten digit classification dataset. As illustrated in FIG2 for an input of shape, this simple architecture deploys four convolutional-pooling layers (each repeated twice) in parallel, with respective region sizes of 3, 4, 5 and 6, followed by two fully-connected layers. Dropout BID31 ) is employed for regularization at every hidden layer. 
The activations are ReLU functions (in that, our model differs from LeNet-5), except for the ultimate layer, which uses a softmax to output a probability distribution over classes. For the convolutionpooling block, we employ 64 filters at the first level, and as the signal is halved through the max pooling layer, the number of filters in the subsequent convolutional layer is increased to 96 to compensate for the loss in resolution. We used node2vec BID14, which applies the very fast Skip-Gram language model BID23 to truncated biased random walks performed on the graph. node2vec scales linearly with the number of nodes in the network. We leveraged the igraph module BID9 and the publicly available high performance C++ implementation 4 of node2vec. We conducted experiments on the publicly available 5 datasets from BID35 which we briefly describe in what follows, and in Table 1. In all datasets, graphs are unweighted, undirected, with unlabeled nodes, and the task is to predict the class they belong to. Classes are mutually exclusive. REDDIT-B, REDDIT-5K, and IMDB-B are perfectly balanced, whereas REDDIT-12K and COLLAB feature a maximum class imbalance ratio of 1:5 and 1:3.4, respectively. Note that graphs with less than 10 nodes were removed because having 5 channels requires at least a 10-dimensional embedding space, which is impossible to obtain with less than 10 nodes. However, this represented only a few graphs per dataset, for some datasets. In all REDDIT datasets, a graph corresponds to a thread where nodes represent users, and there is an edge between two nodes if one of the two users responded to a comment from the other user. More precisely, graphs in REDDIT-B are labeled according to whether they were constructed from Q&A communities or discussion communities, and REDDIT-5K and REDDIT-12K respectively feature graphs from 5 and 11 forums dedicated to specific topics (those forums are known as "subreddits").In COLLAB, graphs are hop-1 neighborhoods of researchers from a scientific collaboration network (two researchers are linked if they co-authored a paper), and are labeled according to the subfield of Physics the corresponding researcher belongs to. Finally, the IMDB-B dataset features hop-1 neighborhoods of actors and actresses selected from two movie collaboration networks corresponding to specific genres (action and romance), in which two actors are linked if they starred in the same movie. Graphs are labeled according to the genre they were sampled from. We refer the reader to the original paper for more information about the datasets. We compared our model to two state-of-the-art graph kernels, the graphlet kernel BID29 ) and the Weisfeiler-Lehman (WL) kernel BID30. The graphlet kernel computes the similarity between two graphs as the cosine of their count vectors. These vectors encode how many subgraphs of size up to a certain threshold can be found in each graph (each entry is an occurrence count). We sampled 2000 graphlets of size up to 6 from each graph. The WL kernel is actually a framework that operates on top of any graph kernel accepting node labels and boosts its performance by using the relabeling procedure of the WL test of isomorphism. More precisely, following the computation of the kernel value between the two graphs, vertex labels are updated based on the labels of their neighbors. This two-step process repeats for a certain number of iterations. The final kernel value is the sum of the values at each iteration. 
Since our graphs have unlabeled nodes, we set the degrees of the nodes as their labels. Furthermore, we used the WL framework with the subtree graph kernel BID12, as it is very efficient with this kernel BID30. For both baselines, we used a C-SVM classifier 6 BID27. The C parameter of the SVM and the number of iterations in WL were jointly optimized on a 90-10 % partition of the training set of each fold by searching the grid (10 −4, 10 4, len = 10); (2, 7, step = 1). All experiments involved 10-fold cross validation where each fold was repeated 3 times. We used Xavier initialization BID13, a batch size of 32, and for regularization, a dropout rate of 0.3 and early stopping with a patience of 5 epochs. The categorical cross-entropy loss was optimized with Adam BID16 In our initial experiments involving spectral embeddings, the coordinates of any node in any dimension belonged to the [−1, 1] range, due to the eigenvectors being unit-normed. Furthermore, inspired by the MNIST images which are 28 × 28 in size, and on which we initially tested our 2D CNN architecture, we decided to learn 2D histograms featuring 28 bins in each direction. This gave us a resolution of 28 /(1−(−1)), that is, 14 pixels per unit (or simply 14:1). As it was giving good , we stuck to similar values in our final experiments making use of neural embeddings. With the p and q parameters of node2vec held constant and equal to 1, we conducted a search on the coarse grid; to get more insights about the impact of resolution and number of channels (respectively). On a given dataset, image size is calculated as the range |max(coordinates) − min(coordinates)| multiplied by the resolution, where "coordinates" are the node loadings flattened across all dimensions of the embedding space. For instance, on COLLAB with a resolution of 9:1, image size is equal to 37 × 37, since |2.78 − (−1.33)| × 9 ≈ 37. Optimal values for each dataset are summarized in Table 2.With the best resolution and number of channels, we then tuned the return and in-out parameters p and q of node2vec. Those parameters respectively bias the random walks towards exploring larger areas of the graph or staying in local neighborhoods, allowing the embeddings to encode a similarity that interpolates between structural equivalence (two nodes acting as, e.g., flow controllers, are close to each other) and homophily (two nodes belonging to the same community are close to each other). We Table 2: Best resolution, number of channels, and (p, q) for each dataset. The classification accuracy of our approach and the baselines we implemented are reported in TAB2. Even though we did not re-implement those models, we also display for comparison purposes the performance reported in BID35 (Deep Graph Kernels) and BID24 (Graph CNN, PSCN k = 10), since the experimental setting is the same. Our approach shows significantly better than all baselines on the REDDIT-12K and REDDIT-B datasets, with large improvements of 6.81 and 2.82 in accuracy over the best performing competitor, respectively. We also reach best performance on the REDDIT-5K dataset, with an improvement in accuracy of 1.34 over the best performing baseline. However, the difference is not statistically significant. Finally, on the IMDB-B dataset, we get third place, very close (≤ 1.2) to the top performers, and again without statistically significant differences. Actually, the only dataset on which a baseline proved significantly better than our approach is COLLAB (WL graph kernel). 
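Before turning to running times, the pipeline described above can be sketched in two short pieces: turning a graph's node embeddings into a multi-channel image (Steps 2 and 3), and the parallel 2D CNN applied to those images. PCA is applied here per graph via SVD, although fitting it once over the node embeddings of the whole collection is an equally valid reading of Step 2; the bin range, resolution, dense-layer width and padding are illustrative assumptions, while the branch region sizes (3 to 6), the filter counts (64 then 96), dropout and the softmax output follow the description above.

```python
import numpy as np

def graph_to_image(node_embeddings, n_channels=5, bins=28, value_range=(-2.5, 2.5)):
    """Turn one graph's node embeddings (num_nodes, D) into a (bins, bins,
    n_channels) stack of 2D histograms.  Assumes the graph has at least
    2 * n_channels nodes (cf. the filtering of graphs with fewer than 10
    nodes above); raw node counts are kept, not normalised."""
    X = node_embeddings - node_embeddings.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(X, full_matrices=False)   # PCA via SVD
    Z = X @ vt[: 2 * n_channels].T                     # first 2 * n_channels components

    edges = np.linspace(value_range[0], value_range[1], bins + 1)
    channels = []
    for c in range(n_channels):
        # One channel per consecutive pair of principal directions (1-2, 3-4, ...).
        hist, _, _ = np.histogram2d(Z[:, 2 * c], Z[:, 2 * c + 1], bins=(edges, edges))
        channels.append(hist)
    return np.stack(channels, axis=-1)
```

The images produced this way are then classified with the multi-branch 2D CNN:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_graph_cnn(input_shape=(37, 37, 5), n_classes=3, dense_units=128, dropout=0.3):
    inputs = layers.Input(shape=input_shape)
    branches = []
    for k in (3, 4, 5, 6):                       # one branch per region size
        x = inputs
        for filters in (64, 96):                 # conv-pool block, repeated twice
            x = layers.Conv2D(filters, k, padding="same", activation="relu")(x)
            x = layers.MaxPooling2D(2)(x)
            x = layers.Dropout(dropout)(x)
        branches.append(layers.Flatten()(x))
    x = layers.Concatenate()(branches)
    x = layers.Dense(dense_units, activation="relu")(x)
    x = layers.Dropout(dropout)(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```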
Even if not directly comparable, we report in TAB3 kernel matrix computation time for the two graph kernel baselines, along with the time required by our 2D CNN model to perform one pass over the entire training set, i.e., the time per epoch. With respects to time complexity, our method is superior to graph kernels on several counts: first, unlike graph kernels, the time required by the 2D CNN to process one training example is constant (all images for a given dataset have the same size), while computing the kernel value for a pair of graphs depends on their size (polynomial in the number of nodes). It is true that a prerequisite for our approach is an embedding for all the graphs in the dataset, but node2vec scales linearly with the number of nodes in the graph. Therefore, on big graphs, our method is still usable, while graph kernels may not be. Also, node2vec is easily parallelizable over the collection, so one can take advantage of multi-core CPUs to considerably speed up the process. Second, with a 2D CNN, the time necessary to go through the entire training set only increases linearly with the size of the set, while populating the kernel matrix is quadratic, and finding the support vectors is then again at least quadratic. This means that on large datasets, our approach is also preferable to graph kernels. Examples of 2D CNN architectures much more complex than ours applied to millions of images in reasonable time abound in the recent computer vision literature. Processing such big datasets with graph kernels would simply be intractable. In addition, not only do neural models allow processing big datasets, but their performance also tends to significantly improve in the presence of large quantities of training data. Motivated by the outstanding performance recently reached by Convolutional Neural Networks (CNNs) in computer vision, e.g. BID34 BID19, many research efforts have been devoted to generalizing CNNs to graphs. Indeed, CNNs offer a very appealing alternative to kernel-based methods. The parsimony achieved through weight sharing makes them very efficient, their time complexity is constant for each training example and linear with respect to the size of the dataset, and the extra expressiveness they bring might translate to significant accuracy gains. However, since convolution and pooling are natively defined for regular, low-dimensional grids such as images (2D Euclidean space discretized by rectangles), generalizing CNNs to graphs, which are irregular, non-Euclidean objects, is far from trivial. Possible solutions that can be found in the literature fall into two broad categories: spatial and spectral techniques BID5. Spectral approaches BID10 BID17 invoke the convolution theorem from signal processing theory to perform graph convolutions as pointwise multiplications in the Fourier domain of the graph. The basis used to send the graph to the Fourier domain is given by the SVD decomposition of the Laplacian matrix of the graph, whose eigenvalues can be viewed as "frequencies". By contrast, spatial methods BID24 BID33 operate directly on the graph structure. For instance, in BID24, the algorithm first determines the sequence of nodes for which neighborhood graphs (of equal size) are created. To serve as receptive fields, the neighborhood graphs are then normalized, i.e., mapped to a vector space with a linear order, in which nodes with similar structural roles in the neighborhood graphs are close to each other. 
Normalization is the central step, and is performed via a labeling procedure. A 1D CNN architecture is finally applied to the receptive fields. While the aforementioned sophisticated frameworks have made great strides, we showed in this paper that graphs can also be processed by vanilla 2D CNN architectures. This is made possible by the novel graph representation we introduce, which encodes graphs as stacks of 2D histograms of their node embeddings. Compared to the more complex approaches that involve different fundamental architectural and/or operational modifications, the main advantage of our method is its simplicity. Crucially, we show that this simplicity can be obtained without giving up accuracy: we indeed outperform graph CNN baselines by a wide margin on some datasets, and are very close elsewhere. Replacing the raw counts by the empirical joint probability density function, either by normalizing the histograms, or with a Kernel Density Estimate, significantly deteriorated performance. This suggests that keeping the absolute values of the counts is important, which makes sense, because some categories might be associated with larger or smaller graphs, on average. Therefore, preventing the model from using size information is likely to decrease accuracy. We also observed that increasing the number of channels to more than 5 does not yield better (which makes sense, as channels contain less and less information), but that reducing this number improves performance in some cases, probably because it plays a regularization role. The main contribution of our study is a novel method for representing graphs as multi-channel image-like structures from their node embeddings, that allows them to be processed by 2D CNNs. How the embeddings are computed, and which 2D CNN architecture is used, does not matter. We hold this flexibility to be a major strength. First, the embedding-agnostic nature of our method means that it can be seamlessly extended to directed, weighted, or labeled graphs with continuous or categorical node/edge attributes, simply by using an embedding algorithm that accepts such graphs, e.g., BID21. The independence of our approach with respect to the image classification model used is another advantage. Here, we employed a vanilla 2D CNN architecture as it was offering an excellent trade-off between accuracy and simplicity, but more recent models, such as the one of BID15, may yield even better . Above all, performance should improve as graph node embedding algorithms and CNN architectures for images improve in the future. Even though are very good out-of-the-box in most cases, finding an embedding algorithm that works well, or the right combination of parameters for a given dataset, can require some efforts. For instance, on COLLAB, we hypothesize that our are inferior to that observed on the other datasets because optimizing p and q for COLLAB may require more than a coarse grid search, or because node2vec may not be well-suited to very dense graphs such as the ones found in COLLAB. The main contribution of this paper is to show that CNN architectures designed for images can be used for graph processing in a completely off-the-shelf manner, simply by representing graphs as stacks of two-dimensional histograms of their node embeddings. Despite the simplicity of our approach, indicate that it is very competitive to state-of-the-art graph kernels and graph CNN models, sometimes outperforming them by a wide margin. 
Furthermore, these good results were obtained with limited parameter tuning and by using a basic 2D CNN model. From a time complexity perspective, our approach is also preferable to graph kernels, as it allows larger datasets featuring bigger graphs to be processed.
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HkOhuyA6-
We introduce a novel way to represent graphs as multi-channel image-like structures that allows them to be handled by vanilla 2D CNNs.
The key attribute that drives the unprecedented success of modern Recurrent Neural Networks (RNNs) on learning tasks which involve sequential data, is their ever-improving ability to model intricate long-term temporal dependencies. However, a well established measure of RNNs' long-term memory capacity is lacking, and thus formal understanding of their ability to correlate data throughout time is limited. Though depth efficiency in convolutional networks is well established by now, it does not suffice in order to account for the success of deep RNNs on inputs of varying lengths, and the need to address their'time-series expressive power' arises. In this paper, we analyze the effect of depth on the ability of recurrent networks to express correlations ranging over long time-scales. To meet the above need, we introduce a measure of the information flow across time that can be supported by the network, referred to as the Start-End separation rank. Essentially, this measure reflects the distance of the function realized by the recurrent network from a function that models no interaction whatsoever between the beginning and end of the input sequence. We prove that deep recurrent networks support Start-End separation ranks which are exponentially higher than those supported by their shallow counterparts. Moreover, we show that the ability of deep recurrent networks to correlate different parts of the input sequence increases exponentially as the input sequence extends, while that of vanilla shallow recurrent networks does not adapt to the sequence length at all. Thus, we establish that depth brings forth an overwhelming advantage in the ability of recurrent networks to model long-term dependencies, and provide an exemplar of quantifying this key attribute which may be readily extended to other RNN architectures of interest, e.g. variants of LSTM networks. We obtain our by considering a class of recurrent networks referred to as Recurrent Arithmetic Circuits (RACs), which merge the hidden state with the input via the Multiplicative Integration operation. Over the past few years, Recurrent Neural Networks (RNNs) have become the prominent machine learning architecture for modeling sequential data, having been successfully employed for language modeling , neural machine translation , speech recognition (; BID1, and more. The success of recurrent networks in learning complex functional dependencies for sequences of varying lengths, readily implies that long-term and elaborate correlations in the given inputs are somehow supported by these networks. However, formal understanding of the influence of a recurrent network's structure on its expressiveness, and specifically on its ever-improving ability to integrate data throughout time (e.g. translating long sentences, answering elaborate questions), is lacking. An ongoing empirical effort to successfully apply recurrent networks to tasks of increasing complexity and temporal extent, includes augmentations of the recurrent unit such as Long Short Term Memory (LSTM) networks and their variants (e.g.). A parallel avenue, which we focus on in this paper, includes the stacking of layers to form deep recurrent networks . Deep recurrent networks, which exhibit empirical superiority over shallow ones (see e.g.), implement hierarchical processing of information at every time-step that accompanies their inherent time-advancing computation. 
Evidence for a time-scale related effect arises from experiments - deep recurrent networks appear to model correlations which correspond to longer time-scales than shallow ones. These findings, which imply that depth brings forth a considerable advantage in complexity and in temporal capacity of recurrent networks, have no adequate theoretical explanation. In this paper, we address the above presented issues. Based on the relative maturity of depth efficiency in neural networks, namely results that show that deep networks efficiently express functions that would require shallow ones to have a super-polynomial size (e.g. ;), it is natural to assume that depth has a similar effect on the expressiveness of recurrent networks. Indeed, we show that depth efficiency holds for recurrent networks. However, the distinguishing attribute of recurrent networks, is their inherent ability to cope with varying input sequence length. Thus, once establishing the above depth efficiency in recurrent networks, a basic question arises, which relates to the apparent depth enhanced long-term memory in recurrent networks: Do the functions which are efficiently expressed by deep recurrent networks correspond to dependencies over longer time-scales? We answer this question, by showing that depth provides an exponential boost to the ability of recurrent networks to model long-term dependencies. In order to take on the above question, we introduce in section 2 a recurrent network referred to as a recurrent arithmetic circuit (RAC) that shares the architectural features of RNNs, and differs from them in the type of non-linearity used in the calculation. This type of connection between state-of-the-art machine learning algorithms and arithmetic circuits (also known as Sum-Product Networks) has well-established precedence in the context of neural networks. Prior work proves a depth efficiency result on such networks, and theoretically analyzes the class of Convolutional Arithmetic Circuits which differ from common ConvNets in the exact same fashion in which RACs differ from more standard RNNs. Conclusions drawn from such analyses were empirically shown to extend to common ConvNets (e.g. ;). Beyond their connection to theoretical models, the modification which defines RACs resembles that of Multiplicative RNNs and of Multiplicative Integration networks, which provide a substantial performance boost over many of the existing RNN models. In order to obtain our results, we make a connection between RACs and the Tensor Train (TT) decomposition, which suggests that Multiplicative RNNs may be related to a generalized TT-decomposition, similar to the way ReLU ConvNets were connected to generalized tensor decompositions. We move on to introduce in section 3 the notion of Start-End separation rank as a measure of the recurrent network's ability to model elaborate long-term dependencies. In order to analyze the long-term correlations of a function over a sequential input which extends T time-steps, we partition the inputs to those which arrive at the first T /2 time-steps ("Start") and the last T /2 time-steps ("End"), and ask how far the function realized by the recurrent network is from being separable w.r.t. this partition. Distance from separability is measured through the notion of separation rank, which can be viewed as a surrogate of the L 2 distance from the closest separable function. For a given function, high Start-End separation rank implies that the function induces strong correlation between the beginning and end of the input sequence, and vice versa.
In section 4 we directly address the depth enhanced long-term memory question above, by examining depth L = 2 RACs and proving that functions realized by these deep networks enjoy Start-End separation ranks that are exponentially higher than those of shallow networks, implying that indeed these functions can model more elaborate input dependencies over longer periods of time. An additional reinforcing result is that the Start-End separation rank of the deep recurrent network grows exponentially with the sequence length, while that of the shallow recurrent network is independent of the sequence length. Informally, this implies that vanilla shallow recurrent networks are inadequate in modeling correlations of long input sequences, since in contrast to the case of deep recurrent networks, the modeled dependencies achievable by shallow ones do not adapt to the actual length of the input. Finally, we present and motivate a quantitative conjecture by which the Start-End separation rank of recurrent networks grows exponentially with the network depth. A proof of this conjecture, which will provide an even deeper insight regarding the advantages of depth in recurrent networks, is left as an open problem. In this section, we introduce a class of recurrent networks referred to as Recurrent Arithmetic Circuits (RACs), which shares the architectural features of standard RNNs. (Figure 1: Shallow and deep recurrent networks, as described by eqs. 1 and 4, respectively.) As demonstrated below, the operation of RACs on sequential data is identical to the operation of RNNs, where a hidden state mixes information from previous time-steps with new incoming data (see fig. 1). The two classes differ only in the type of non-linearity used in the calculation, as described by eqs. 1-3. In the following sections, we utilize the algebraic properties of RACs for proving results regarding their ability to model long-term dependencies of their inputs. We present below the basic framework of shallow recurrent networks (fig. 1(a)), which describes both the common RNNs and the newly introduced RACs. A recurrent network is a network that models a discrete-time dynamical system; we focus on an example of a sequence to sequence classification task into one of the categories {1, ..., C} ≡ [C]. Denoting the temporal dependence by t, the sequential input to the network is {x t ∈ X} T t=1, and the output is a sequence of class scores vectors y t,L,Θ (x 1, . . ., x t) ∈ R C for t ∈ [T], where L is the network depth, Θ denotes the parameters of the recurrent network, and T represents the extent of the sequence in time-steps. We assume the input lies in some input space X that may be discrete (e.g. text data) or continuous (e.g. audio data), and that some initial mapping f: X → R M is performed on the input, so that all input types are mapped to vectors f (x t) ∈ R M. The function f (·) may be viewed as an encoding, e.g. words to vectors or images to a final dense layer via some trained ConvNet. The output at time t ∈ [T] of the shallow (depth L = 1) recurrent network with R hidden channels, depicted in fig. 1(a), is given by: h t = g(W H h t−1, W I f (x t)) and y t,1,Θ (x 1, . . ., x t) = W O h t (eq. 1), where h t ∈ R R is the hidden state of the network at time t (h 0 is some initial hidden state), Θ denotes the learned parameters W I ∈ R R×M, W H ∈ R R×R, W O ∈ R C×R, which are the input, hidden and output weights matrices respectively, and g is some non-linear operation. A bias term is usually added to eq. 1, however, because it bears no effect on our analysis, we omit it for simplicity.
For common RNNs, the non-linearity is given by: g RNN (a, b) = σ(a + b) (eq. 2), where σ(·) is typically some point-wise non-linearity such as sigmoid, tanh etc. For the newly introduced class of RACs, g is given by: g RAC (a, b) = a ⊙ b (eq. 3), where the operation ⊙ stands for element-wise multiplication between vectors, for which the resultant vector upholds (a ⊙ b) i = a i · b i. This form of merging the input and the hidden state by multiplication rather than addition is referred to as Multiplicative Integration. The extension to deep recurrent networks is natural, and we follow the common approach (see e.g.) where each layer acts as a recurrent network which receives the hidden state of the previous layer as its input. The output at time t of the depth L recurrent network with R hidden channels in each layer, 1 depicted in fig. 1(b), is constructed by the following: h t,l = g(W H,l h t−1,l, W I,l h t,l−1), with h t,0 ≡ f (x t), and y t,L,Θ (x 1, . . ., x t) = W O h t,L (eq. 4), where h t,l ∈ R R is the state of the depth l hidden unit at time t (h 0,l is some initial hidden state per layer), and Θ denotes the learned parameters. Specifically, W I,l and W H,l ∈ R R×R are the input and hidden weights matrices at depth l, respectively. For l = 1, the weights matrix which multiplies the inputs vector has the appropriate dimensions: W I,1 ∈ R R×M. The output weights matrix is W O ∈ R C×R as in the shallow case, representing a final calculation of the scores for all classes 1 through C at every time-step. The non-linear operation g determines the type of the deep recurrent network, where a common deep RNN is obtained by choosing g = g RNN (eq. 2), and a deep RAC is obtained for g = g RAC (eq. 3). We consider the newly presented class of RACs to be a good surrogate of common RNNs. Firstly, there is an obvious structural resemblance between the two classes, as the recurrent aspect of the calculation has the exact same form in both networks (fig. 1). In fact, recurrent networks that include Multiplicative Integration similarly to RACs, have been shown to outperform many of the existing RNN models. Secondly, as mentioned above, arithmetic circuits have been successfully used as surrogates of convolutional networks. The fact that prior work laid the foundation for extending the proof methodologies of convolutional arithmetic circuits to common ConvNets with ReLU activations, suggests that such adaptations may be made in the recurrent network analog, rendering the newly proposed class of recurrent networks all the more interesting. In the following sections, we make use of the algebraic properties of RACs in order to obtain clear-cut observations regarding the benefits of depth in recurrent networks. In this section, we establish means for quantifying the ability of recurrent networks to model long-term temporal dependencies in the sequential input data. We begin by introducing the Start-End separation-rank of the function realized by a recurrent network as a measure of the amount of information flow across time that can be supported by the network. We then tie the Start-End separation rank to the algebraic concept of grid tensors, which will allow us to employ tools and results from tensorial analysis in order to show that depth provides an exponential boost to the ability of recurrent networks to model elaborate long-term temporal dependencies. We define below the concept of the Start-End separation rank for functions realized by recurrent networks after T time-steps, i.e. real functions that take as input X = (x 1, . . ., x T) ∈ X T. The separation rank quantifies a function's distance from separability with respect to two disjoint subsets of its inputs.
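To make the recurrences of eqs. 1-4 concrete before moving on, here is a minimal NumPy sketch of a shallow RAC and its deep (stacked) extension; all function names and the toy dimensions are ours, and the inputs are assumed to already be encoded by f(·).

import numpy as np

def shallow_rac(xs, W_I, W_H, W_O, h0):
    """Shallow RAC (eqs. 1 and 3): h_t = (W_H h_{t-1}) * (W_I f(x_t)); scores = W_O h_T."""
    h = h0
    for x in xs:
        h = (W_H @ h) * (W_I @ x)          # Multiplicative Integration
    return W_O @ h

def deep_rac(xs, W_Is, W_Hs, W_O, h0s):
    """Depth-L RAC (eq. 4): each layer receives the hidden state of the layer below as input."""
    hs = [h.copy() for h in h0s]
    for x in xs:
        inp = x
        for l in range(len(hs)):
            hs[l] = (W_Hs[l] @ hs[l]) * (W_Is[l] @ inp)
            inp = hs[l]                    # passed upward as the next layer's input
    return W_O @ hs[-1]

M, R, C, T = 4, 3, 2, 6                    # toy dimensions (ours)
rng = np.random.default_rng(0)
xs = [rng.standard_normal(M) for _ in range(T)]
scores = shallow_rac(xs, rng.standard_normal((R, M)), rng.standard_normal((R, R)),
                     rng.standard_normal((C, R)), np.ones(R))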
Specifically, let (S, E) be a partition of input indices, such that S = {1, . . ., T /2} and E = {T /2 + 1, . . ., T} (we consider even values of T throughout the paper for convenience of presentation). This implies that {x s} s∈S are the first T /2 ("Start") inputs to the network, and {x e} e∈E are the last T /2 ("End") inputs to the network. For a function y: X T → R, the Start-End separation rank is defined as follows: sep (S,E) (y) ≜ min{K ∈ N : ∃ g 1, . . ., g K, g ′ 1, . . ., g ′ K : X T /2 → R such that y(x 1, . . ., x T) = Σ K ν=1 g ν (x 1, . . ., x T /2) · g ′ ν (x T /2+1, . . ., x T)} (eq. 5). In words, it is the minimal number of summands that together give y, where each summand is separable w.r.t. (S, E), i.e. is equal to a product of two functions - one that intakes only inputs from the first T /2 time-steps, and another that intakes only inputs from the last T /2 time-steps. The separation rank w.r.t. a general partition of the inputs was introduced in for high-dimensional numerical analysis, and was employed for various applications, e.g. chemistry, particle engineering, and machine learning. Prior works connect the separation rank to the L 2 distance of the function from the set of separable functions, and use it to measure correlations modeled by deep convolutional networks. Related works tie the separation rank to the family of quantum entanglement measures, which quantify correlations in many-body quantum systems. In our context, if the Start-End separation rank of a function realized by a recurrent network is equal to 1, then the function is separable, meaning it cannot model any interaction between the inputs which arrive at the beginning of the sequence and the inputs that follow later, towards the end of the sequence. Specifically, if sep (S,E) (y) = 1 then there exist g s: X T /2 → R and g e: X T /2 → R such that y(x 1, . . ., x T) = g s (x 1, . . ., x T /2) · g e (x T /2+1, . . ., x T), and the function y cannot take into account consistency between the values of {x 1, . . ., x T /2} and those of {x T /2+1, . . ., x T}. In a statistical setting, if y were a probability density function, this would imply that {x 1, . . ., x T /2} and {x T /2+1, . . ., x T} are statistically independent. The higher sep (S,E) (y) is, the farther y is from this situation, i.e. the more it models dependency between the beginning and the end of the inputs sequence. Stated differently, if the recurrent network's architecture restricts the hypothesis space to functions with low Start-End separation ranks, a more elaborate long-term temporal dependence, which corresponds to a function with a higher Start-End separation rank, cannot be learned. In section 4 we show that deep RACs support Start-End separation ranks which are exponentially larger than those supported by shallow RACs, and are therefore much better fit to model long-term temporal dependencies. To this end, we employ in the following sub-section the algebraic tool of grid tensors that will allow us to evaluate the Start-End separation ranks of deep and shallow RACs. We begin by laying out basic concepts in tensor theory required for the upcoming analysis. The core concept is that of a tensor, which may be thought of as a multi-dimensional array. The order of a tensor is defined to be the number of indexing entries in the array, referred to as modes. The dimension of a tensor in a particular mode is defined as the number of values taken by the index in that mode. If A is a tensor of order T and dimension M i in each mode i ∈ [T], its entries are denoted A d1...d T, where the index in each mode takes values d i ∈ [M i]. A fundamental operator in tensor analysis is the tensor product, which we denote by ⊗.
It is an operator that intakes two tensors A ∈ R M1×···×M P and B ∈ R M P +1 ×···×M P +Q, and returns a tensor A ⊗ B ∈ R M1×···×M P +Q defined by: (A ⊗ B) d 1 ...d P +Q = A d 1 ...d P · B d P +1 ...d P +Q. An additional concept we will make use of is the matricization of A w.r.t. the partition (S, E), denoted A S,E, which is essentially the arrangement of the tensor elements as a matrix whose rows correspond to S and columns to E (formally presented in appendix C). We consider the function realized by a shallow RAC with R hidden channels, which computes the score of class c ∈ [C] at time T. This function, which is given by a recursive definition in eqs. 1 and 3, can be alternatively written in the following closed form: y T,1,Θ c (x 1, . . ., x T) = Σ M d 1 ...d T =1 A T,1,Θ c,d 1 ...d T · Π T t=1 f d t (x t) (eq. 6), where the order T tensor A T,1,Θ c, which lies at the heart of the above expression, is referred to as the shallow RAC weights tensor, since its entries are polynomials in the network weights Θ. Specifically, denoting the rows of the input weights matrix, W I, by a I,α ∈ R M (or element-wise: a DISPLAYFORM2 DISPLAYFORM3, the shallow RAC weights tensor can be gradually constructed in the following fashion: DISPLAYFORM4 having set h 0 = W H † 1, where † is the pseudoinverse operation. In the above equation, the tensor products, which appear inside the sums, are directly related to the Multiplicative Integration property of RACs (eq. 3). The sums originate in the multiplication of the hidden states vector by the hidden weights matrix at every time-step (eq. 1). The construction of the shallow RAC weights tensor, presented in eq. 7, is referred to as a Tensor Train (TT) decomposition of TT-rank R in the tensor analysis community and is analogously described by a Matrix Product State (MPS) Tensor Network (see Orús) in the quantum physics community. See appendix A for the Tensor Networks construction of deep and shallow RACs, which provides graphical insight regarding the exponential complexity brought forth by depth in recurrent networks. We now present the concept of grid tensors, which are a form of function discretization. Essentially, the function is evaluated for a set of points on an exponentially large grid in the input space and the outcomes are stored in a tensor. Formally, fixing a set of template vectors x (1),..., x (M) ∈ X, the points on the grid are the set {( DISPLAYFORM5, and for a given function the set of its values on the grid arranged in the form of a tensor is called the grid tensor induced by it, DISPLAYFORM6. The grid tensors of functions realized by recurrent networks will allow us to calculate their separation ranks and establish definitive results regarding the benefits of depth in these networks. Having presented the tensorial structure of the function realized by a shallow RAC, as given by eqs. 6 and 7 above, we are now in a position to tie its Start-End separation rank to its grid tensor, as formulated in the following claim: Claim 1. Let y T,1,Θ c be the function computing the score of class c ∈ [C] at time T of the shallow RAC defined above, and let A T,1,Θ c be its shallow RAC weights tensor, constructed according to eq. 7. Assume that the network's initial mapping functions {f d: X → R} M d=1 are linearly independent, and that they, as well as the functions g ν, g ′ ν in the definition of Start-End separation rank (eq. 5), are measurable and square-integrable. 2 Then, there exist template vectors x (1),..., x (M) ∈ X such that the following holds: sep (S,E) (y T,1,Θ c) = rank (A(y T,1,Θ c) S,E) = rank ((A T,1,Θ c) S,E), where A(y T,1,Θ c) is the grid tensor of y T,1,Θ c with respect to the above template vectors. Proof.
See appendix B.1. The above claim establishes an equality between the Start-End separation rank and the rank of the matrix obtained by the corresponding grid tensor matricization, denoted A(y T,1,Θ c) S,E, with respect to a specific set of template vectors. Note that the limitation to specific template vectors does not restrict our results, as grid tensors are merely a tool used to bound the separation rank. The additional equality to the rank of the matrix obtained by matricizing the shallow RAC weights tensor, will be of use to us when proving our main result below (theorem 1). Due to the inherent use of data duplication in the computation performed by a deep RAC (see appendix A.3 for further details), it cannot be written in a closed tensorial form similar to that of eq. 6. This in turn implies that the equality shown in claim 1 does not hold for functions realized by deep RACs. The following claim introduces a fundamental relation between a function's Start-End separation rank and the rank of the matrix obtained by the corresponding matricization. This relation, which holds for all functions, is formulated below for functions realized by deep RACs: Claim 2. Let y T,L,Θ c be the function computing the score of class c at time T of an RAC of depth L. Then, for any set of template vectors x (1),..., x (M) ∈ X it holds that: sep (S,E) (y T,L,Θ c) ≥ rank (A(y T,L,Θ c) S,E), where A(y T,L,Θ c) is the grid tensor of y T,L,Θ c with respect to the above template vectors. Proof. See appendix B.2. Claim 2 will allow us to provide a lower bound on the Start-End separation rank of functions realized by deep RACs, which we show to be exponentially higher than the Start-End separation rank of functions realized by shallow RACs (to be obtained via claim 1). Thus, in the next section, we employ the above presented tools to show that an exponential enhancement of the Start-End separation rank is brought forth by depth in recurrent networks. In this section, we present the main theoretical contributions of this paper. In section 4.1, we formally present a result which exponentially separates between the memory capacity of a deep (L = 2) recurrent network and a shallow (L = 1) one. Following the formal presentation of results in theorem 1, we discuss some of their implications and then conclude by sketching a proof outline for the theorem (full proof is relegated to appendix B.3). In section 4.2, we present a quantitative conjecture regarding the enhanced memory capacity of deep recurrent networks of general depth L, which relies on the inherent combinatorial properties of the recurrent network's computation. We leave the formal proof of this conjecture for future work. 4.1 SEPARATING BETWEEN SHALLOW AND DEEP RECURRENT NETWORKS Theorem 1 states that the correlations modeled between the beginning and end of the input sequence to a recurrent network, as measured by the Start-End separation rank (see section 3.1), can be exponentially more complex for deep networks than for shallow ones: Theorem 1. Let y T,L,Θ c be the function computing the output after T time-steps of an RAC with L layers, R hidden channels per layer, weights denoted by Θ, and initial hidden states h 0,l, l ∈ [L]. Assume that the network's initial mapping functions {f d} M d=1 are linearly independent, and let sep (S,E) (y T,L,Θ c) be the Start-End separation rank of y T,L,Θ c (eq. 5). Then, the following holds almost everywhere, i.e. for all values of Θ×h 0,l but a set of Lebesgue measure zero: for L = 1 (shallow network), sep (S,E) (y T,1,Θ c) = min{R, M T /2 }; for L = 2 (deep network), sep (S,E) (y T,2,Θ c) is lower bounded by the multiset coefficient of R̄ ≡ min{M, R} and T /2, given in the binomial form by (T /2 + R̄ − 1 choose R̄ − 1). The above theorem readily implies that depth entails an enhanced ability of recurrent networks to model long-term temporal dependencies in the sequential input.
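As a purely numeric illustration of what claims 1 and 2 and theorem 1 assert (a sanity check under toy dimensions of our own choosing, not a substitute for the formal proof), the following sketch evaluates a randomly initialized shallow RAC on every combination of standard-basis template inputs, arranges the values into a grid tensor, matricizes it w.r.t. the (Start, End) partition, and compares the resulting rank to min{R, M^(T/2)}.

import numpy as np
from itertools import product

M, R, T = 3, 2, 4                                      # toy dimensions (ours)
rng = np.random.default_rng(0)
W_I = rng.standard_normal((R, M))
W_H = rng.standard_normal((R, R))
w_O = rng.standard_normal(R)                           # output weights for a single class c
templates = np.eye(M)                                  # template vectors: f(x^(i)) = e_i

def score(idx):
    """Shallow RAC score on the input sequence whose t-th input is template number idx[t]."""
    h = np.ones(R)                                     # an arbitrary initial hidden state
    for d in idx:
        h = (W_H @ h) * (W_I @ templates[d])           # eq. 1 with g = g_RAC (eq. 3)
    return w_O @ h

# grid tensor over all M^T index combinations, matricized w.r.t. (Start, End)
grid = np.array([score(idx) for idx in product(range(M), repeat=T)])
A_SE = grid.reshape(M ** (T // 2), M ** (T // 2))
print(np.linalg.matrix_rank(A_SE), min(R, M ** (T // 2)))   # both equal 2 for generic weights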
Specifically, theorem 1 indicates depth efficiency - it ensures us that upon randomizing the weights of a deep RAC with R hidden channels per layer, with probability 1 the function realized by it after T time-steps may only be realized by a shallow RAC with a number of hidden channels that is exponentially large. 3 (Footnote 3: The combinatorial coefficient DISPLAYFORM6 is exponentially dependent on R̄ ≡ min{M, R}: for T > 2 · (R̄ − 1) this value is larger than ...) Stated alternatively, this means that almost all functional dependencies which lie in the hypothesis space of deep RACs with R hidden channels per layer, calculated after T time-steps, are inaccessible to shallow RACs with less than an exponential number of hidden channels. Thus, a shallow recurrent network would require exponentially more parameters than a deep recurrent network, if it is to implement the same function. The established role of the Start-End separation rank as a correlation measure between the beginning and the end of the sequence (see section 3.1), implies that these functions, which are realized by almost any deep network and can never be realized by a shallow network of a reasonable size, represent more elaborate correlations over longer periods of time. The above notion is strengthened by the fact that the Start-End separation rank of deep RACs increases with the sequence length T, while the Start-End separation rank of shallow RACs is independent of it. This indicates that shallow recurrent networks are much more restricted in modeling long-term correlations than the deep ones, which enjoy an exponentially increasing Start-End separation rank as time progresses. Below, we present an outline of the proof for theorem 1 (see appendix B.3 for the full version): Proof sketch of theorem 1. 1. For a shallow network, claim 1 establishes that the Start-End separation rank of the function realized by a shallow (L = 1) RAC is equal to the rank of the matrix obtained by matricizing the corresponding shallow RAC weights tensor (eq. 6) according to the Start-End partition: DISPLAYFORM5 ) S,E. Thus, it suffices to prove that rank (A T,1,Θ c) S,E = R in order to satisfy the first bullet of the theorem, as the rank is trivially upper-bounded by the dimension of the matrix, M T /2. To this end, we call upon the TT-decomposition of A T,1,Θ c, given by eq. 7, which corresponds to the MPS Tensor Network presented in appendix A. We rely on a recent result by , who state that the rank of the matrix obtained by matricizing any tensor according to a partition (S, E), is equal to a min-cut separating S from E in the Tensor Network graph representing this tensor. The required equality follows from the fact that the TT-decomposition in eq. 7 is of TT-rank R, which in turn implies that the min-cut in the appropriate Tensor Network graph is equal to R. 2. For a deep network, claim 2 assures us that the Start-End separation rank of the function realized by a depth L = 2 RAC is lower bounded by the rank of the matrix obtained by the corresponding grid tensor matricization: DISPLAYFORM7. Showing that this rank attains the bound of the theorem for all of the values of parameters Θ × h 0,l but a set of Lebesgue measure zero, would satisfy the theorem, and again, the rank is trivially upper-bounded by the dimension of the matrix, M T /2. We use a lemma proved in , which states that since the entries of A(y T,L,Θ c) are polynomials in the deep recurrent network's weights, it suffices to find a single example for which the rank of the matricized grid tensor is greater than the desired lower bound.
Finding such an example would indeed imply that for almost all of the values of the network parameters, the desired inequality holds. We choose a weight assignment such that the resulting matricized grid tensor resembles a matrix obtained by raising a rank-R̄ ≡ min{M, R} matrix to the Hadamard power of degree T /2. This operation, which raises each element of the original rank-R̄ matrix to the power of T /2, was shown to yield a matrix with a rank upper-bounded by the multiset coefficient DISPLAYFORM8 (see e.g. BID0). We show that our assignment results in a matricized grid tensor with a rank which is not only upper-bounded by this value, but actually achieves it. Theorem 1 provides a lower bound of DISPLAYFORM0 on the Start-End separation rank of depth L = 2 recurrent networks, exponentially separating deep recurrent networks from shallow ones. By a trivial assignment of weights in higher layers, the Start-End separation rank of even deeper recurrent networks (L > 2) is also lower-bounded by this expression, which does not depend on L. In the following, we conjecture that a tighter lower bound holds for networks of depth L > 2, the form of which implies that the memory capacity of deep recurrent networks grows exponentially with the network depth: Conjecture 1. Under the same conditions as in theorem 1, for all values of Θ × h 0,l but a set of Lebesgue measure zero, it holds for any L that: DISPLAYFORM1. We motivate conjecture 1 by investigating the combinatorial nature of the computation performed by a deep RAC. By constructing Tensor Networks which correspond to deep RACs, we attain an informative visualization of this combinatorial perspective. In appendix A, we provide full details of this construction and present the formal motivation for the conjecture. Below, we qualitatively outline this combinatorial approach. A Tensor Network is essentially a graphical tool for representing algebraic operations which resemble multiplications of vectors and matrices, between higher order tensors. FIG3 shows an example of the Tensor Network representing the computation of a depth L = 3 RAC after T = 6 time-steps. This well-defined computation graph hosts the values of the weight matrices at its nodes. The inputs {x 1, . . ., x T} are marked by their corresponding time-step {1, . . ., T}, and are integrated in a depth dependent and time-advancing manner (see further discussion regarding this form in appendix A.3), as portrayed in the example of FIG3. We highlight in red the basic unit in the Tensor Network which connects "Start" inputs {1, . . ., T /2} and "End" inputs {T /2+1, . . ., T}. In order to estimate a lower bound on the Start-End separation rank of a depth L > 2 recurrent network, we employ a similar strategy to that presented in the proof sketch of the L = 2 case (see section 4.1). Specifically, we rely on the fact that it is sufficient to find a specific instance of the network parameters Θ × h 0,l for which A(y T,L,Θ c) S,E achieves a certain rank, in order for this rank to bound the Start-End separation rank of the network from below. Indeed, we find a specific assignment of the network weights, presented in appendix A.4, for which the Tensor Network effectively takes the form of the basic unit connecting "Start" and "End", raised to the power of the number of its repetitions in the graph (bottom of FIG3). This basic unit corresponds to a simple computation represented by a grid tensor with Start-End matricization of rank R.
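The Hadamard-power rank bound that this argument relies on can be checked numerically; the sketch below (dimensions are our own choice) builds a generic rank-R̄ matrix, raises it element-wise to a power p, and compares the resulting rank to the multiset coefficient, written in binomial form as C(p + R̄ − 1, R̄ − 1), which for generic matrices is attained.

import numpy as np
from math import comb

rng = np.random.default_rng(0)
Rbar, p, n = 3, 3, 40                                   # Rbar plays the role of min{M, R}; p plays T/2
B = rng.standard_normal((n, Rbar)) @ rng.standard_normal((Rbar, n))   # a generic rank-Rbar matrix
B_hadamard_p = B ** p                                   # element-wise (Hadamard) p-th power
print(np.linalg.matrix_rank(B_hadamard_p))              # equals the multiset coefficient for generic data
print(comb(p + Rbar - 1, Rbar - 1))                     # multiset coefficient in binomial form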
Raising such a matrix to the Hadamard power of any p ∈ Z results in a matrix with a rank upper bounded by DISPLAYFORM2, and the challenge of proving the conjecture amounts to proving that the upper bound is tight in this case. In appendix A.4, we prove that the number of repetitions of the basic unit connecting "Start" and "End" in the deep RAC Tensor Network graph, is exactly equal to DISPLAYFORM3 for any depth L. For example, in the T = 6, L = 3 network illustrated in FIG3, the number of repetitions indeed corresponds to p = 3 2 = 6. It is noteworthy that for L = 1, 2 the bound in conjecture 1 coincides with the bounds that were proved for these depths in theorem 1. Conjecture 1 indicates that beyond the proved exponential advantage in memory capacity of deep networks over shallow ones, a further exponential separation may be shown between recurrent networks of different depths. We leave the proof of this result, which can reinforce and refine the understanding of advantages brought forth by depth in recurrent networks, as an open problem. The notion of depth efficiency, by which deep networks efficiently express functions that would require shallow networks to have a super-polynomial size, is well established in the context of convolutional networks. However, recurrent networks differ from convolutional networks, as they are suited by design to tackle inputs of varying lengths. Accordingly, depth efficiency alone does not account for the remarkable performance of recurrent networks on long input sequences. In this paper, we identified a fundamental need for a quantifier of 'time-series expressivity', quantifying the memory capacity of recurrent networks. In order to meet this need, we proposed a measure of the ability of recurrent networks to model long-term temporal dependencies, in the form of the Start-End separation rank. The separation rank was used to quantify correlations in convolutional networks, and has roots in the field of quantum physics. The proposed measure adjusts itself to the temporal extent of the input series, and quantifies the ability of the recurrent network to correlate the incoming sequential data as time progresses. We analyzed the class of Recurrent Arithmetic Circuits, which are closely related to successful RNN architectures, and proved that the Start-End separation rank of deep RACs increases exponentially as the input sequence extends, while that of shallow RACs is independent of the input length. These results, which demonstrate that depth brings forth an overwhelming advantage in the ability of recurrent networks to model long-term dependencies, were achieved by combining tools from the fields of measure theory, tensorial analysis, combinatorics, graph theory and quantum physics. Such analyses may be readily extended to other architectural features employed in modern recurrent networks. Indeed, the same time-series expressivity question may now be applied to the different variants of LSTM networks, and the proposed notion of Start-End separation rank may be employed for quantifying their memory capacity. We have demonstrated that such a treatment can go beyond unveiling the origins of the success of a certain architectural choice, and leads to new insights. The above established observation that correlations achievable by vanilla shallow recurrent networks do not adapt at all to the sequence length, is an exemplar of this potential. Moreover, practical recipes may emerge by such theoretical analyses.
The experiments performed in , suggest that shallow layers of recurrent networks are related to short time-scales, e.g. in speech: phonemes, syllables, words, while deeper layers appear to support correlations of longer time-scales, e.g. full sentences, elaborate questions. These findings open the door to further depth related investigations in recurrent networks, and specifically the role of each layer in modeling temporal correlations may be better understood. Prior work establishes theoretical observations which translate into practical results regarding the number of hidden channels to be chosen for each layer in a deep convolutional network. The conjecture presented in this paper, by which the Start-End separation rank of recurrent networks grows exponentially with depth, can similarly entail practical recipes for enhancing their memory capacity. Such analyses can be reinforced by experiments, and lead to a profound understanding of the contribution of deep layers to the recurrent network's memory. Indeed, we view this work as an important step towards novel methods of matching the recurrent network architecture to the temporal correlations in a given sequential data set. We begin in section A.1 by providing a brief introduction to TNs. Next, we present in section A.2 the TN which corresponds to the calculation of a shallow RAC, and tie it to a common TN architecture referred to as a Matrix Product State (MPS) (see overview in e.g. Orús), and equivalently to the tensor train (TT) decomposition. Subsequently, we present in section A.3 a TN construction of a deep RAC, and emphasize the characteristics of this construction that are the origin of the enhanced ability of deep RACs to model elaborate temporal dependencies. Finally, in section A.4, we make use of the above TNs construction in order to formally motivate conjecture 1, according to which the Start-End separation rank of RACs grows exponentially with depth. A TN is a weighted graph, where each node corresponds to a tensor whose order is equal to the degree of the node in the graph. Accordingly, the edges emanating out of a node, also referred to as its legs, represent the different modes of the corresponding tensor. The weight of each edge in the graph, also referred to as its bond dimension, is equal to the dimension of the appropriate tensor mode. In accordance with the relation between mode, dimension and index of a tensor presented in section 3.2, each edge in a TN is represented by an index that runs between 1 and its bond dimension. FIG4 shows three examples: A vector, which is a tensor of order 1, is represented by a node with one leg. A matrix, which is a tensor of order 2, is represented by a node with two legs. Accordingly, a tensor of order N is represented in the TN as a node with N legs. We move on to present the connectivity properties of a TN. Edges which connect two nodes in the TN represent an operation between the two corresponding tensors. An index which represents such an edge is called a contracted index, and the operation of contracting that index is in fact a summation over all of the values it can take. An index representing an edge with one loose end is called an open index. The tensor represented by the entire TN, whose order is equal to the number of open indices, can be calculated by summing over all of the contracted indices in the network. An example for a contraction of a simple TN is depicted in FIG4.
There, a TN corresponding to the operation of multiplying a vector v ∈ R r 1 by a matrix M ∈ R r 2 ×r 1 is performed by summing over the only contracted index, k. As there is only one open index, d, the result of contracting the network is an order 1 tensor (a vector): u ∈ R r 2 which upholds u = M v. Though we use below the contraction of indices in more elaborate TNs, this operation can be essentially viewed as a generalization of matrix multiplication. The computation of the output at time T that is performed by the shallow recurrent network given by eqs. 1 and 3, or alternatively by eqs. 6 and 7, can be written in terms of a TN. FIG5 shows this TN, which given some initial hidden state h0, is essentially a temporal concatenation of a unit cell that performs a similar computation at every time-step, as depicted in FIG5. For any time t < T, this unit cell is composed of the input weights matrix, W I, contracted with the inputs vector, f (x t), and the hidden weights matrix, W H, contracted with the hidden state vector of the previous time-step, h t−1. The final component in each unit cell is the 3 legged triangle representing the order 3 tensor δ ∈ R R×R×R, referred to as the δ tensor, defined by: δ i1 i2 i3 ≡ 1 if i1 = i2 = i3 and 0 otherwise (eq. 10), with ij ∈ [R] ∀j ∈ [3], i.e. its entries are equal to 1 only on the super-diagonal and are zero otherwise. The use of a triangular node in the TN is intended to remind the reader of the restriction given in eq. 10. The recursive relation that is defined by the unit cell, is given by the TN in FIG5 (b): DISPLAYFORM1 where kt ∈ [R]. In the first equality, we simply follow the TN prescription and write a summation over all of the contracted indices in the left hand side of FIG5, in the second equality we use the definition of matrix multiplication, and in the last equality we use the definition of the δ tensor. The component-wise equality of eq. 11 readily implies h t = (W H h t−1) ⊙ (W I f (x t)), reproducing the recursive relation in eqs. 1 and 3, which defines the operation of the shallow RAC. From the above treatment, it is evident that the restricted δ tensor is in fact the component in the TN that yields the element-wise multiplication property. After T repetitions of the unit cell calculation with the sequential input {x t} T t=1, a final multiplication of the hidden state vector h T by the output weights matrix W O yields the output vector y T,1,Θ. The tensor network which represents the order T shallow RAC weights tensor A, which appears in eqs. 6 and 7, is given by the TN in the upper part of FIG5. In FIG5, we show that by a simple contraction of indices, the TN representing the shallow RAC weights tensor A T,1,Θ c can be drawn in the form of a standard MPS TN. This TN allows the representation of an order T tensor with a linear (in T) amount of parameters, rather than the regular exponential amount (A has M T entries). The decomposition which corresponds to this TN is known as the Tensor Train (TT) decomposition of rank R in the tensor analysis community, its explicit form given in eq. 7. The presentation of the shallow recurrent network in terms of a TN allows the employment of the min-cut analysis, which was introduced by in the context of convolutional networks, for quantification of the information flow across time modeled by the shallow recurrent network. This was indeed performed in our proof of the shallow case of theorem 1. We now move on to present the computation performed by a deep recurrent network in the language of TNs.
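The element-wise-multiplication role of the δ tensor described above is easy to verify numerically; the following sketch (toy dimensions ours) contracts the super-diagonal δ tensor with W H h t−1 and W I f(x t) via einsum and checks that the contraction equals the Multiplicative Integration product of eq. 3.

import numpy as np

R, M = 4, 5
rng = np.random.default_rng(0)
W_H = rng.standard_normal((R, R))
W_I = rng.standard_normal((R, M))
h_prev = rng.standard_normal(R)
fx = rng.standard_normal(M)

delta = np.zeros((R, R, R))
for i in range(R):
    delta[i, i, i] = 1.0                                # the super-diagonal delta tensor of eq. 10

h_tn = np.einsum('ijk,j,k->i', delta, W_H @ h_prev, W_I @ fx)   # TN unit-cell contraction
h_direct = (W_H @ h_prev) * (W_I @ fx)                           # element-wise product of eq. 3
assert np.allclose(h_tn, h_direct)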
The construction of a TN which matches the calculation of a deep recurrent network is far less trivial than that of the shallow case, due to the seemingly innocent property of reusing information which lies at the heart of the calculation of deep recurrent networks. Specifically, all of the hidden states of the network are reused, since the state of each layer at every time-step is duplicated and sent as an input to the calculation of the same layer in the next time-step, and also as an input to the next layer up in the same time-step (see fig. 1(b)). The required operation of duplicating a vector and sending it to be part of two different calculations, which is simply achieved in any practical setting, is actually impossible to represent in the framework of TNs. We formulate this notion in the following claim: Claim 3. Let v ∈ R P, P ∈ N be a vector. v is represented by a node with one leg in the TN notation. The operation of duplicating this node, i.e. forming two separate nodes of degree 1, each equal to v, cannot be achieved by any TN. Proof. We assume by contradiction that there exists a Tensor Network φ which operates on any vector v ∈ R P and clones it to two separate nodes of degree 1, each equal to v, to form an overall TN representing v ⊗ v. Component-wise, this implies that φ upholds ∀v ∈ R P: DISPLAYFORM0, meaning that ∀α ∈ [P]: DISPLAYFORM1 By definition of the standard basis elements, the left hand side of eq. 12 takes the form φ αjk while the right hand side equals 1 only if j = k = α, and otherwise 0. Utilizing the δ-tensor notation presented in eq. 10, in order to successfully clone the standard basis elements, eq. 12 implies that φ must uphold φ αjk = δ αjk. However, for v = 1, i.e. ∀j ∈ [P]: vj = 1, a cloning operation does not take place when using this value of φ, since DISPLAYFORM2. Claim 3 seems to pose a hurdle in our pursuit of a TN representing a deep recurrent network. Nonetheless, a form of such a TN may be attained by a simple 'trick' - in order to model the duplication that is inherently present in the deep recurrent network computation, we resort to duplicating the input data itself. By this technique, for every duplication that takes place along the calculation, the input is inserted into the TN multiple times, once for each sequence that leads to the duplication point. This principle, which allows us to circumvent the restriction imposed by claim 3, yields the elaborate TN construction of deep RACs depicted in FIG6. It is important to note that these TNs, which grow exponentially in size as the depth L of the recurrent network represented by them increases, are merely a theoretical tool for analysis and not a suggested implementation scheme for deep recurrent networks. The actual deep recurrent network is constructed according to the simple scheme given in fig. 1(b), which grows only linearly in size as the depth L increases, despite the corresponding TN growing exponentially. In fact, this exponential 'blow-up' in the size of the TNs representing the deep recurrent networks is closely related to their ability to model more intricate correlations over longer periods of time in comparison with their shallower counterparts, which was established in section 4. In FIG6, a depth L = 2 recurrent network is presented, spread out in time onto T = 4 time-steps. To understand the logic underlying the input duplication process, which in turn entails duplication of entire segments of the TN, we focus on the calculation of the hidden state vector h 2,2 that is presented in FIG6.
When the first inputs vector, f (x 1), is inserted into the network, it is multiplied by W I,1 and the outcome is equal to h 1,1. 4 (Footnote 4: In this figure, the initial condition for each layer l ∈ [L], h l,0, is chosen such that a vector of ones will be present in the initial element-wise multiplication.) Next, h 1,1 is used in two different places, as an inputs vector to layer L = 2 at time t = 1, and as a hidden state vector in layer L = 1 for time t = 2 calculation. Our input duplication technique inserts f (x 1) into the network twice, so that the same exact h 1,1 is achieved twice in the TN, as marked by the red dotted line in FIG6. This way, every copy of h 1,1 goes to the appropriate segment of the calculation, and indeed the TN in FIG6 holds the correct value of h 2,2: DISPLAYFORM3 DISPLAYFORM4 The extension to deeper layers leads us to a fractal structure of the TNs, involving many self-similarities, as in the L = 3 example given in FIG6. The duplication of intermediate hidden states, marked in red and blue in this example, is the source of the apparent complexity of this L = 3 RAC TN. Generalizing the above L = 1, 2, 3 examples, a TN representing an RAC of general depth L and of T time-steps, would involve in its structure T duplications of TNs representing RACs of depth L − 1, each of which has a distinct length in time-steps i, where i ∈ [T]. This fractal structure leads to a complexity of the TN representing the depth L RAC computation that increases with depth, which we show in the next subsection to motivate the combinatorial lower bound on the Start-End separation rank of deep RACs, given in conjecture 1. The above presented construction of TNs which correspond to deep RACs, allows us to further investigate the effect of network depth on its ability to model long-term temporal dependencies. We present below a formal motivation for the lower bound on the Start-End separation rank of deep recurrent networks, given in conjecture 1. Though our analysis employs TN visualizations, it is formal nonetheless - these graphs represent the computation in a well-defined manner (see sections A.1-A.3). Our conjecture relies on the fact that it is sufficient to find a specific instance of the network parameters Θ × h 0,l for which A(y T,L,Θ c) S,E achieves a certain rank, in order for this rank to be a lower bound on the Start-End separation rank of the network. This follows from combining claim 2 and lemma 1. Claim 2 assures us that the Start-End separation rank of the function realized by an RAC of any depth L, is lower bounded by the rank of the matrix obtained by the corresponding grid tensor matricization: DISPLAYFORM1. Thus, it suffices to show that this rank attains the conjectured value for all of the values of parameters Θ × h 0,l but a set of Lebesgue measure zero, in order to establish the lower bound in conjecture 1. Next, we rely on lemma 1, which states that since the entries of A(y T,L,Θ c) are polynomials in the deep recurrent network's weights, it suffices to find a single example for which the rank of the matricized grid tensor is greater than the desired lower bound. Finding such an example would indeed imply that for almost all of the values of the network parameters, the desired inequality holds. In the following, we choose a weight assignment that effectively 'separates' between the first layer and higher layers, in the sense that W I,2 is of rank-1. This is done in similar spirit to the assignment used in the proof of theorem 1, in which W I,2 ij ≡ δi1 (see section B.3).
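The input-duplication argument can also be mimicked numerically. In the sketch below (toy dimensions and an all-ones initial multiplicative factor, chosen by us to mimic the initial-condition convention of the figure), h 2,2 is computed once by the ordinary depth-2 recursion, which reuses a single copy of h 1,1, and once by an expression in which every use of h 1,1 is re-derived from a duplicated copy of f(x 1), as the TN does; the two computations agree.

import numpy as np

R, M = 3, 4
rng = np.random.default_rng(0)
W_I1 = rng.standard_normal((R, M))     # layer-1 input weights
W_H1 = rng.standard_normal((R, R))     # layer-1 hidden weights
W_I2 = rng.standard_normal((R, R))     # layer-2 input weights (acts on layer-1 states)
W_H2 = rng.standard_normal((R, R))     # layer-2 hidden weights
f_x1, f_x2 = rng.standard_normal(M), rng.standard_normal(M)
ones = np.ones(R)                      # stands in for the initial multiplicative factor

# ordinary depth-2 recursion: h^{1,1} is computed once and reused twice
h11 = ones * (W_I1 @ f_x1)
h12 = (W_H1 @ h11) * (W_I1 @ f_x2)
h21 = ones * (W_I2 @ h11)
h22_recursion = (W_H2 @ h21) * (W_I2 @ h12)

# TN-style computation: every use of h^{1,1} is re-derived from a duplicated copy of f(x^1)
h11_copy_a = ones * (W_I1 @ f_x1)
h11_copy_b = ones * (W_I1 @ f_x1)
h22_duplicated = (W_H2 @ (ones * (W_I2 @ h11_copy_a))) * (W_I2 @ ((W_H1 @ h11_copy_b) * (W_I1 @ f_x2)))
assert np.allclose(h22_recursion, h22_duplicated)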
Under this simplifying assignment, which suffices for our purposes according to the above discussion, the entire computation performed in deeper layers contributes only a constant factor to the matricized grid tensor. In this case, the example of the TN corresponding to an RAC of depth L = 3 after T = 6 time-steps, which is shown in full in FIG3, takes the form shown in the upper half of FIG9. Next, in order to evaluate rank A(y T,L,Θ c) S,E, we note that graph segments which involve only indices from the "Start" set, will not affect the rank of the matrix under mild conditions on W I,1, W H,1. Specifically, under the Start-End matricization these segments will amount to a different constant multiplying each row of the matrix. For the example of the RAC of depth L = 3 after T = 6 time-steps, this amounts to the effective TN given in the bottom left side of FIG9. Finally, the dependence of this TN on the indices of time-steps {T /2 + 2, . . ., T}, namely those outside of the basic unit involving indices of time-steps {1, . . ., T /2 + 1}, may only increase the resulting Start-End matricization rank. 6 Thus, we are left with an effective TN resembling the one shown in section 4.2, where the basic unit separating "Start" and "End" indices is raised to the power of the number of its repetitions in the graph. In the following, we prove a claim according to which the number of repetitions of this basic unit in the TN graph increases exponentially with the depth of the RAC: Claim 4. Let φ(T, L, R) be the TN representing the computation performed after T time-steps by an RAC with L layers and R hidden channels per layer. Then, the number of occurrences in layer L = 1 of the basic unit connecting "Start" and "End" indices (bottom right in FIG9), is exactly DISPLAYFORM0. Proof. Let y T,L,Θ c be the function computing the output after T time-steps of an RAC with L layers, R hidden channels per layer and weights denoted by Θ. In order to focus on repetitions in layer L = 1, we assign W I,2 ij ≡ δi1 for which the following upholds 7: A(y DISPLAYFORM1, where the constant term in the first line is the contribution of the deeper layers under this assignment, and the tensor V d 1 ...d T/2, which becomes a vector under the Start-End matricization, reflects the contribution of the "Start" set indices. Observing the argument of the chain of products in the above expression, DISPLAYFORM2 r j r j+1, it is an order t2 tensor, exactly given by the TN representing the computation of a depth L = 1 RAC after t2 time-steps. Specifically, for t2 = T /2 + 1, it is exactly equal to the basic TN unit connecting "Start" and "End" indices, and for T /2 + 1 < t2 ≤ T it contains this basic unit. This means that in order to obtain the number of repetitions of this basic unit in φ, we must count the number of multiplications implemented by the chain of products in the above expression. (Footnote 5: For example, this holds if W I,1 is fully ranked and does not have vanishing elements, and W H,1 = I. Footnote 6: This is not true for any TN of this shape but holds due to the temporal invariance of the recurrent network's weights. Footnote 7: See a similar and more detailed derivation in section B.3.)
Indeed this number is equal to: DISPLAYFORM3 Finally, the form of the lower bound presented in conjecture 1 is obtained by considering a rank R matrix, such as the one obtained by the Start-End matricization of the TN basic unit discussed above, raised to the Hadamard power. For a set of template vectors x (1),..., x (M) ∈ X, we define the DISPLAYFORM4, for which it holds that: Otherwise, assume that DISPLAYFORM5, and let DISPLAYFORM6 be the functions of the respective decomposition to a sum of separable functions, i.e. that the following holds: DISPLAYFORM7 Then, by definition of the grid tensor, for any template vectors x (1),..., x (M) ∈ X the following equality holds, where the factors are column and row vectors, respectively, which we denote by vν and u T ν. It follows that the matricization of the grid tensor is given by: DISPLAYFORM8 DISPLAYFORM9. In this sub-section, we follow the proof strategy that is outlined in section 4, and prove theorem 1, which shows an exponential advantage of deep recurrent networks over shallow ones in the ability to model long-term dependencies, as measured by the Start-End separation rank (see section 3.1). In sections B.3.1 and B.3.2, we prove the bounds on the Start-End separation rank of the shallow and deep RACs, respectively, while more technical lemmas which are employed during the proof are relegated to section B.3.3. We consider the Tensor Network construction of the calculation carried out by a shallow RAC, given in FIG5. According to the presented construction, the shallow RAC weights tensor (eqs. 6 and 7) is represented by a Matrix Product State (MPS) Tensor Network (Orús, 2014), with the following order-3 tensor building block: DISPLAYFORM0, where dt ∈ [M] is the input index and kt−1, kt ∈ [R] are the internal indices (see FIG5). In TN terms, this means that the bond dimension of this MPS is equal to R. We apply the result of , who state that the rank of the matrix obtained by matricizing any tensor according to a partition (S, E) is equal to a min-cut separating S from E in the Tensor Network graph representing this tensor, for all of the values of the TN parameters but a set of Lebesgue measure zero. In this MPS Tensor Network, the minimal cut w.r.t. the partition (S, E) is equal to the bond dimension R, unless R > M T/2, in which case the minimal cut contains the external legs instead. Thus, in the TN representing A T,1,Θ c, the minimal cut equals min{R, M T/2 }, which establishes the shallow case. For a deep network, claim 2 assures us that the Start-End separation rank of the function realized by a depth L = 2 RAC is lower bounded by the rank of the matrix obtained by the corresponding grid tensor matricization, for any choice of template vectors. Specifically: DISPLAYFORM0 Thus, since it trivially holds that rank A(y DISPLAYFORM1 T/2 (the rank is smaller than the dimension of the matrix), proving that rank A(y DISPLAYFORM2 for all of the values of parameters Θ×h 0,l but a set of Lebesgue measure zero, would satisfy the theorem. In the following, we provide an assignment of weight matrices and initial hidden states for which rank A(y DISPLAYFORM3. In accordance with claim 5, this will suffice as such an assignment implies this rank is achieved for all configurations of the recurrent network weights but a set of Lebesgue measure zero. We begin by choosing a specific set of template vectors x (1),..., x (M) ∈ X. Let F ∈ R M ×M be a matrix with entries defined by Fij ≡ fj(x (i)). According to , since {f d} M d=1 are linearly independent, there is a choice of template vectors for which F is non-singular. Next, we describe our assignment.
In the expressions below we use the notation δ ij = 1 if i = j and δ ij = 0 if i ≠ j. Let z ∈ R \ {0} be an arbitrary non-zero real number, let Ω ∈ R+ be an arbitrary positive real number, and let Z ∈ R R×M be a matrix with entries Zij ≡ z DISPLAYFORM4. We set W I,1 ≡ Z · (F T) −1 and set W I,2 such that its entries are W I,2 DISPLAYFORM5 to the identity matrix, and additionally we set the entries of DISPLAYFORM6. Finally, we choose the initial hidden state values so they bear no effect on the calculation, namely h 0,l = W H,l −1 1 = 1 for l = 1, 2. Under the above assignment, the output for the corresponding class c after T time-steps is equal to: DISPLAYFORM7. When evaluating the grid tensor for our chosen set of template vectors, i.e. A(y DISPLAYFORM8, we can substitute fj(x (i)) ≡ Fij, and thus DISPLAYFORM9 Since we defined Z such that for r ≥ min{R, M} Z rd = 0, and denoting R̄ ≡ min{R, M} for brevity of notation, the grid tensor takes the following form: DISPLAYFORM10 where we split the product into two expressions, the left part that contains only the indices in the start set S, i.e. d1,..., dT /2, and the right part which contains all external indices (in the start set S and the end set E). Thus, under matricization w.r.t. the Start-End partition, the left part is mapped to a vector a ≡ T/2 t=1 R r=1 t j=1 Z rd j S,E containing only non-zero entries per the definition of Z, and the right part is mapped to a matrix B ≡ DISPLAYFORM11, where each entry of u multiplies the corresponding row of B. This results in: DISPLAYFORM12. Since a contains only non-zero entries, diag(a) is of full rank, and so rank A(y DISPLAYFORM13. For brevity of notation, we define N ≡ DISPLAYFORM14. To prove the above, it is sufficient to show that B can be written as a sum of N rank-1 matrices, i.e. B = DISPLAYFORM15, where the corresponding factors are two sets of linearly independent vectors. Indeed, applying claim 6 on the entries of B, specified w.r.t. the row (d1, . . ., dT /2) and column (dT /2+1, . . ., dT), yields the following form: DISPLAYFORM16 where for all k, p (k) is an R̄-dimensional vector of non-negative integer numbers which sum to k, and we explicitly define states R, T /2 and trajectory p (T/2) in claim 6, providing a softer more intuitive definition ) is the accumulated reward of the optimal strategy of emptying the bucket. In lemma 3 we prove that there exists a value of Ω such that for every sequence of colors d, i.e. a row of V, the maximal reward over all possible initial states is solely attained at the state q for all values of z but a finite set, we know there exists a value of z for which rank (B) = N, and the theorem follows. In this section we prove a series of useful technical lemmas, that we have employed in our proof for the case of deep RACs, as described in section B.3.2. We begin by quoting a claim regarding the prevalence of the maximal matrix rank for matrices whose entries are polynomial functions: Claim 5. Let M, N, K ∈ N, 1 ≤ r ≤ min{M, N} and a polynomial mapping A: R K → R M ×N, i.e. for every i ∈ [M] and j ∈ [N] it holds that Aij: R K → R is a polynomial function. If there exists a point x ∈ R K s.t. rank(A(x)) ≥ r, then the set {x ∈ R K : rank(A(x)) < r} has zero measure (w.r.t. the Lebesgue measure over R K). Claim 5 implies that it suffices to show a specific assignment of the recurrent network weights for which the corresponding grid tensor matricization achieves a certain rank, in order to show this is a lower bound on its rank for all configurations of the network weights but a set of Lebesgue measure zero.
Essentially, this means that it is enough to provide a specific assignment that achieves the required bound in theorem 1 in order to prove the theorem. Next, we show that for a matrix with entries that are polynomials in x, if a single contributor to the determinant has the highest degree of x, then the matrix is fully ranked for all values of x but a finite set: Lemma 1. Let A ∈ R N ×N be a matrix whose entries are polynomials in x ∈ R. In this case, its determinant may be written as det(A) = Σ σ∈S N sgn(σ) p σ (x), where S N is the symmetric group on N elements and p σ (x) are polynomials defined by p σ (x) ≡ Π N i=1 A iσ(i) (x), ∀σ ∈ S N. Additionally, let there exist σ̄ such that deg(p σ̄ (x)) > deg(p σ (x)) ∀σ ≠ σ̄. Then, for all values of x but a finite set, A is fully ranked. Proof. We show that in this case det(A), which is a polynomial in x by its definition, is not the zero polynomial. Accordingly, det(A) ≠ 0 for all values of x but a finite set. Denoting t ≡ deg(p σ̄ (x)), since t > deg(p σ (x)) ∀σ ≠ σ̄, a monomial of the form c · x t, c ∈ R \ {0} exists in p σ̄ (x) and doesn't exist in any p σ (x), σ ≠ σ̄. This implies that det(A) is not the zero polynomial, since its leading term has a non-vanishing coefficient sgn(σ̄) · c ≠ 0, and the lemma follows from the basic identity: det(A) ≠ 0 ⇐⇒ A is fully ranked. The above lemma assisted us in confirming that the assignment provided for the recurrent network weights indeed achieves the required grid tensor matricization rank of R T /2. The following lemma establishes a useful relation we refer to as the vector rearrangement inequality: Lemma 2. DISPLAYFORM0 Proof. We rely on theorem 368 in , which implies that for a set of non-negative numbers {a (1),..., a (N) } the following holds for all σ ∈ S N: DISPLAYFORM1 with equality obtained only for σ which upholds σ(i) = j ⇐⇒ a (i) = a (j). The above relation, referred to as the rearrangement inequality, holds separately for each component j ∈ [R] of the given vectors: DISPLAYFORM2 We now prove that for all σ ∈ S N such that σ ≠ I N, ∃ĵ ∈ [R] for which the above inequality is hard, i.e.: DISPLAYFORM3 By contradiction, assume that ∃σ ≠ I N for which ∀j ∈ [R]: DISPLAYFORM4 From the conditions of achieving equality in the rearrangement inequality defined in eq. 14, it holds that ∀j ∈ being a set of N different vectors in R R. Finally, the hard inequality of the lemma for σ ≠ I N is implied from eq. 15: DISPLAYFORM5 The vector rearrangement inequality in lemma 2 helped us ensure that our matrix of interest, denoted Ū, upholds the conditions of lemma 1 and is thus fully ranked. Below, we show an identity that allowed us to make combinatoric sense of a convoluted expression: which is strictly less than the respective contribution in ρ *.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HJ3d2Ax0-
We propose a measure of long-term memory and prove that deep recurrent networks are much better fit to model long-term temporal dependencies than shallow ones.
Holistically exploring the perceptual and neural representations underlying animal communication has traditionally been very difficult because of the complexity of the underlying signal. We present here a novel set of techniques to project entire communicative repertoires into low-dimensional spaces that can be systematically sampled from, exploring the relationship between perceptual representations, neural representations, and the latent representational spaces learned by machine learning algorithms. We showcase this method in one ongoing experiment studying sequential and temporal maintenance of context in songbird neural and perceptual representations of syllables. We further discuss how studying the neural mechanisms underlying the maintenance of the long-range information content present in birdsong can inform and be informed by machine sequence modeling. Systems neuroscience has a long history of decomposing the features of complex signals under the assumption that they can be untangled and explored systematically, part-by-part. Research on human speech, for example, rests on a rich understanding of the phonological, semantic, and syntactic features of speech and language. In contrast, the communicative spaces of many model organisms in auditory neuroscience are more poorly understood, leading to a very small number of model organisms having the necessary tools for study. In birdsong, for example, biophysical models of song production that have been developed for zebra finches do not capture the dynamics of the dual-syrinx vocal tract of European starlings. More species-general approaches to modeling communication would increase the accessibility of more diverse and more systematic explorations of animal communication systems in neuroscience. Here, we propose a method based upon recent advances in generative modeling to explore and systematically sample from entire communicative repertoires. We show that this method is successful in species as diverse as songbirds, primates, insects, cetaceans, and amphibians, and in recording conditions both in the lab and in the field. We demonstrate this in the songbird, whose basal ganglia and frontal-cortex-analogous structures actively maintain temporal information, and whose song exhibits long-range temporal dependencies that parallel those seen in human language. Figure 2: Outline of the context-dependent perception task. Birds are tasked with classifying a smooth morph between syllables generated from a VAE, generating a psychometric function of classification behavior. Sequential-contextual cues that precede the classified syllables are given to bias the psychometric function. In the present experiment we explore how sequential context is maintained in the songbird brain. To this end, we train a songbird to classify regions of a VAE-generated latent space of song, and manipulate the perception of those regions of space based upon sequential-contextual information (Fig 2). Specifically, we interpolate between syllables of European starling song projected into latent space. We train a starling to classify the left and right halves of the interpolation using an operant-conditioning apparatus (Fig. 4). We then provide a contextual syllable preceding the classified syllable that holds predictive information over the classified syllable (Fig 2 bottom). We hypothesize that the perception of the boundary between the classified stimuli shifts as a function of the context cue.
We model this hypothesis using Bayes' rule, with the preceding contextual syllable setting the prior over the classified syllables. When a stimulus varies along a single dimension x (the interpolation), the perceived value of x is a function of the true value of x and contextual information (Fig. 3 left). The initial behavioral results of our experiment confirmed our hypotheses (Fig. 3). We additionally performed acute
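As an illustration of this Bayes-rule hypothesis, the sketch below shows how a context-dependent prior shifts the 50% point of the psychometric function along the interpolation dimension x; the Gaussian likelihoods and the prior values are illustrative assumptions, not parameters fitted to the behavioral data.

```python
# Minimal sketch: the contextual syllable sets a prior over the two response classes,
# which shifts the psychometric boundary along the interpolation dimension x.
import numpy as np

def posterior_left(x, prior_left, mu_left=0.25, mu_right=0.75, sigma=0.15):
    """P(class = left | x, context), with context entering only through the prior."""
    lik_left = np.exp(-0.5 * ((x - mu_left) / sigma) ** 2)
    lik_right = np.exp(-0.5 * ((x - mu_right) / sigma) ** 2)
    num = lik_left * prior_left
    return num / (num + lik_right * (1.0 - prior_left))

x = np.linspace(0.0, 1.0, 201)  # position along the syllable interpolation

for prior_left in (0.5, 0.8):  # neutral context vs. context predictive of "left"
    p = posterior_left(x, prior_left)
    boundary = x[np.argmin(np.abs(p - 0.5))]  # where the psychometric curve crosses 0.5
    print(f"prior P(left) = {prior_left:.1f} -> boundary near x = {boundary:.2f}")

# A context cue favouring "left" moves the 50% boundary toward the "right" end of the
# morph, so more of the interpolation is classified as the contextually predicted class.
```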
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
r1gKmmKULB
We compare perceptual, neural, and modeled representations of animal communication using machine learning, behavior, and physiology.
The information bottleneck principle suggests that SGD-based training of deep neural networks in optimally compressed hidden layers, from an information theoretic perspective. However, this claim was established on toy data. The goal of the work we present here is to test these claims in a realistic setting using a larger and deeper convolutional architecture, a ResNet model. We trained PixelCNN++ models as inverse representation decoders to measure the mutual information between hidden layers of a ResNet and input image data, when trained for classification and autoencoding. We find that two stages of learning happen for both training regimes, and that compression does occur, even for an autoencoder. Sampling images by conditioning on hidden layers’ activations offers an intuitive visualisation to understand what a ResNets learns to forget. Conditionally generated images are shown in (b) -(f). Ten epochs is the peak of fitting, and 200 epochs is the end of compression. These samples enable an intuitive illustration of compression in hidden layers. Based on this example it seems that a compressed representation (f) in varied samples because it compresses class-irrelevant information. Compare the beginning (d) to the end (f) of compression: there is greater variety at the end without losing the essence of'horse'.Deep neural networks are ubiquitous in machine learning and computer vision. Unfortunately, the popularity of neural networks for applications is unmatched by an agreed upon and clear understanding of how exactly they work, and why they generalise well. The field of deep learning will advance with more comprehensive theory and empirical studies that better characterise neural networks. Generalisation, in the context of learning, means the extraction of typical abstract properties of training data that can be successfully used for inference on unseen data. Neural networks generalise well even when they are over-parameterised. highlighted the need to rethink generalisation, because conventional wisdom is not readily applicable to neural networks. A number of avenues for new generalisation bounds for neural networks BID2 BID6 BID12 exemplify how inapplicable conventional methods for understanding model generalisation can be. One approach to better understanding neural networks is the Information Bottleneck (IB, Section 2) interpretation of deep learning (; ;). The IB accredits deep learning success to compression in hidden layers via the noise of parameter updates in stochastic gradient descent (SGD). Information compression in optimal representations that discard task-irrelevant data while keeping task-relevant data. The IB principle has since been actively debated , partially motivating this work. The novelty is that we apply information theoretic analyses to modern convolutional residual neural networks (ResNets, Section 5) trained on realistic images (Section 5.2). These choices complicate analysis since information quantities are non-trivial to compute for high-dimensions. Our solution is to define a lower bound on the mutual information (MI, Section 4) and to estimate it by training decoder models (Section 4.1). The decoder model for the MI between images and hidden layers is a conditional PixelCNN++ and samples generated therefrom illustrate visually MI (FIG0). • An experimental framework for tracking the MI in a realistic setting. Tracking both the forward and inverse MI in a ResNet using realistic images. 
Earlier research tracked these quantities for constrained toy-problems or low-dimensional data. Lifting these constraints requires defining models to compute a lower bound on the MI.• Analysis of PixelCNN++ samples conditioned on hidden layer activations to illustrate the type of information that ResNet classifiers learn to compress. This is done via the visual demonstration of the sorts of invariances that a ResNet learns. This paper compliments earlier work on the IB interpretation of deep learning, which is described in the next section. The key difference is that we analyse a modern network trained on realistic images. The IB interpretation of learning suggests that an optimal representation exists between input data, x, and target data, y, that captures all relevant components in x about y. An optimal representation, h, should retain only the information relevant for the task described by y. The IB interpretation posits that the hidden layers in neural networks learn hidden layer configurations that maximally compress characteristics in the input data that are irrelevant to the target task. In classification, for example: the nature of the ground and/or in an image of a horse may not matter and could be discarded, provided the horse remains (see FIG0). We interpret the activations of hidden layers as random variables so that we can compute and track the mutual information (MI) between these layers and data. MI is defined as: DISPLAYFORM0 which is the Kullback-Leibler (KL) divergence between the joint distribution of two random variables and the product of their marginals. applied the IB principle to deep learning. By studying what they called the information plane -how I(x; h) and I(y; h) changed over time (h is a hidden layer, x is input data, and y is the target) -they showed that neural networks have two learning phases: 1. Fitting, or empirical error minimisation, where the information shared between hidden representations and input data was maximised. 2. Compression, where the the information shared between hidden representation and input data was minimised but constrained to the classification task at hand. We refer the reader to Sections 2.1 to 2.3 in for more detail on the information plane analysis. Generalisation via compression was put forward as the reason deep learning is successful. From the IB perspective, an advantage of depth is computational in that it shortens compression time. Applying the IB principle to deep learning goes some way to give a theoretical explanation of why neural networks generalise well. However, empirical research to determine its true relevance and applicability is paramount -we contribute here by analysing a modern network using realistic data to see if the principle of compression for generalisation holds in this case. What we seek to show in this paper is that modern convolutional networks do evidence information compression during training. We use the ReLU family of non-linearities and instead of binning to compute MI, we use decoding models. Therefore, our experiments aim to contextualise further the IB principle and its applicability to deep learning theory. BID8 queried whether compression is necessary for generalisation by constructing an invertible convolutional neural network (CNN). They posited that a reversible network is an example that refutes the IB principle because information is never discarded. 
They also provided an accompanying counter explanation: where depth is responsible for progressively separating data (in the inter-class sense) and contracting data (in the intra-class sense). Although intentionally reversible networks BID8 do not discard irrelevant components of the input space, we postulate that these instead progressively separate the irrelevant components from the relevant, allowing the final classification mapping to discard this information. Inverting Supervised Representations Concurrent to the work in this paper, BID11 trained conditional PixelCNN++ models to'invert' representations learned by a CNN classifier. Using the MNIST BID10 CIFAR-10 image datasets, they showed how a PixelCNN++ can be used as a tool to analyse the invariances and behaviour of hidden layers. The MI was tracked using a PixelCNN++ model, as here. However, the model tested was much smaller than the ResNet we inspect here; we test both classification and autoencoding, whereas that work only considered classification; we provide a finer granularity of assessment, including at model initialisation; and the conditionally generated samples we provide illustrate greater variation owing to the deeper ResNet. In the next section we present and discuss a lower bound on the MI. Earlier work applied various binning strategies to toy data. Binning is not applicable to the modern networks we study here because images as input have many more dimensions than toy datasets. We derive a lower bound, similar to BID1, on the MI between two random vectors as follows: DISPLAYFORM0 where x is the input image data, h is a hidden representation (the activations of a hidden layer), p D is the true data distribution, and q is an auxiliary distribution introduced for the lower bound. We need DISPLAYFORM1, since the entropy of the data is constant. The lower bound follows since the KL-divergence is positive. We can replace x with y for the analogous quantity w.r.t. the target data. With sufficiently large data (of size N) we can estimate: DISPLAYFORM2 where x (i) and h (i) are images and the activations of a hidden layer, respectively. The task now becomes defining the decoder models q(x | h) and q(y | h) to estimate the MI. MI is difficult to compute unless the problem and representations are heavily constrained or designed to make it possible. We do not enforce such constraints, but instead define decoder models that can estimate a lower bound on the MI.Forward direction: computing I(y; h) The decoder model for q(y | h) is constructed as a classifier model that takes as input a hidden layer's activations. To simplify matters, we define this decoder model to be identical to the encoder architecture under scrutiny (Section 5). To compute the MI between any chosen hidden representation, h j (where j defines the layer index), we freeze all weights prior to this layer, reinitialise the weights thereafter, and train these remaining weights as before (Section 5.1).Inverse direction: computing I(x; h) The input images -x ∈ R M ×M ×3, where M = 32 is the image width/height -are high-dimensional. This makes estimating I(x; h) more complicated than I(y; h). To do so, we use a PixelCNN++ model : a state-of-the-art autoregressive explicit density estimator that allows access to the model log-likelihood (a critical advantage over implicit density estimators). See Appendix B for more details. A note on the quality of the lower bound The tightness of the lower bound is directly influenced by the quality of the model used to estimate it. 
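Since the display equations are lost in this copy, the following sketch records the estimator as the surrounding text describes it: assuming the standard variational form I(x; h) ≥ H(x) + E_{p_D(x,h)}[log q(x | h)], the bound is tracked, up to the constant H(x), by the average held-out log-likelihood of the decoder model (Equation 3). The function below is a placeholder-level sketch, not the paper's implementation.

```python
# Minimal sketch of the lower-bound estimate (Equation 3). `decoder_log_prob` is a
# placeholder for the trained decoder: the PixelCNN++ log-likelihood for the inverse
# direction I(x; h), or the classifier-style decoder for the forward direction I(y; h).
def mi_lower_bound_estimate(decoder_log_prob, evaluation_pairs):
    """Average log q(x | h) in nats over held-out (x, h) pairs."""
    log_probs = [decoder_log_prob(x, h) for x, h in evaluation_pairs]
    return sum(log_probs) / len(log_probs)  # add the constant H(x) to bound I(x; h)
```

Because H(x) is constant, comparing this estimate across layers or across training epochs traces the information curves of Section 6 up to a fixed offset.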
We take a pragmatic stance on what sort of error to expect: using a PixelCNN++ to decode I(x; h) essentially estimates the level of usable information, in as much as it can recover the input images. A similar argument can be made for the forward direction, but there is no direct way of measuring the tightness of the bound. Even though empirical research such as this could benefit from an ablation study, for instance, we leave that to future work. The architecture used to encode hidden layers was taken directly from the original ResNet BID7 classifier architecture for CIFAR-10. This is either trained for classification or autoencoding. Further, this architecture is not-altered when using it to do forward decoding (Section 4.1).We define four hidden layers for which we track the MI: h 1, h 2, h 3, and h 4. We sample from these (Equation 3) as the activations: at the end of the three hyper-layers of the ResNet (h 1, h 2, and h 3); and h 4 after a 4 × 4 average pooling of h 3 (see FIG10 in Appendix A). h 4 is the penultimate layer and is therefore expected to be most compressed. None of these layers have skip connections over them. Regarding the autoencoder's decoder, this is created by simply inverting the architecture using upsampling. The autoencoder's bottleneck is h 4.The hyper-parameters for the PixelCNN++ decoder models were set according to the original paper. Regarding conditioning on h: this is accomplished by either up-or down-sampling h to fit all necessary layers (Appendix B.1 expands on this). Both the classifier and autoencoder weights were optimised using SGD with a learning rate of 0.1 and cosine annealing to zero over 200 epochs, a momentum factor of 0.9 and a L2 weight decay factor of 0.0005. We used the leaky ReLU non-linearity. Cross-entropy loss was used for the classifier, while mean-squared-error (MSE) loss was used for the autoencoder. Our implementation was written in PyTorch . For clarity, Algorithm 1 in Appendix C gives a breakdown of the experimental procedure we followed. The analysis in this paper requires computing MI using decoder models, presenting a challenge in that this is a data-hungry process. We need: enough data to train the models under scrutiny; enough data to train the decoder models; and enough data for the actual MI estimate (Equation 3). Moreover, the above requirements need separate data drawn from the same distribution to avoid data contamination and overfitting, particularly for PixelCNN++. Hence, we require a three-way split:1. Encoding, for training the autoencoder and classifer; 2. Decoding, for training the models under which MI is computed; and 3. Evaluation, to provide unbiased held-out estimates of I(x; h) and I(y; h).Since CIFAR-10 (BID9) is too small and Imagenet BID4 ) is too difficult, we used a recently compiled dataset called CINIC-10: CINIC-10 is Not Imagenet or CIFAR-10 BID3. It was compiled by combining (downsampled) images from the Imagenet database with CIFAR-10. It consists of 270,000 images split into 3 equal subsets, which we use as the encoding, decoding, and evaluation sets. In the next section we discuss observations from tracking MI.6 OBSERVATIONS: IB PRINCIPLE FOR A made a number of claims regarding deep learning. We make observations in this section and connect them to the IB interpretation. In Section 6.1 we show and discuss a series of figures that shows that both fitting and compression occur in a ResNet. 
In Section 6.2 we illustrate the quality of information kept and discarded by analysis of conditionally generated images from the PixelCNN++ model. FIG2 gives the information curves expressed in the same fashion as in earlier work ; FIG7 tracks the MI for classifier and autoencoder training. Appendix E gives some training curves for the PixelCNN++ decoding models to show their convergence behaviour and clarify the associated computational burden of this work. Classification For classification I(y; h j) always increases and greater changes in MI can be seen for later layers. The convergence point (200 epochs) is the point at which I(y; h 1) ≈ I(y; h 2) ≈ I(y; h 3), where I(y; h j) is maximised in all layers subject to model constraints. The convergence of all layers to a similar information content shows that neural networks are good at passing target information forward. The lighter crosses in FIG7 (a) are from linear probes BID0 to show that all layers become more linearly separable while maximising I(y; h j).A fitting stage is clear for all measured layers, where I(h; y) first increased. This stage was not as short as initially suggested as it took between 5 and 10 epochs. This indicates that the initial fitting phase of learning may have a larger influence on the solution the network finds. The initial representation learning can be seen as learning a good representation that enables compression. For convolutional ResNets, this process is non-trivial. We observed compression of information in hidden layers for classification, shown in FIG7 by the fact that the MI between input and hidden activations decreases. These observations do not necessarily contradict the findings of , but it does serve to show that compression does indeed occur in this setting. h 4 begins compressing first but also compresses the least (67 nats). The layer immediately preceding the average pooling -h 3 -begins compressing between 5 and 10 epochs and compresses almost twice as much (120 nats). Finally, the earliest layer we tracked -h 2 -compressed from approximately 10 epochs and to a greater degree than other layers (267 nats). Next, we turn our attention to autoencoding since earlier work has focused solely on classification. Autoencoding We observed compression in hidden layers for autoencoding. Moreover, classrelevant information in the bottleneck layer is also compressed (exemplified by I(y; h 3)). This is because for autoencoding target is class-indifferent. This may explain why simple autoencoding often does not perform well as a pretraining regime for classification without explicit target-based fine-tuning BID5 ).Compression during autoencoding is surprising since the target is identical to the input: there is no target-irrelevant information. An explanation could be that the autoencoder learns a representation that is easier for decoding, even at the cost of reducing the MI at the bottleneck layer. In this section we show and discuss conditionally generated samples to illustrate the type of information kept and discarded by a ResNet classifier. Conditional PixelCNN++ Samples were processed for the classifier set-up only, since the samples for the autoencoder were almost entirely indistinguishable from the input. Samples are given in FIG9, conditioned on h 3 and h 4, respectively. Samples conditioned on h 2 are given in Appendix D. Inspecting these samples for classifier training gives an intuitive perspective of what quality of information is compressed by the ResNet, in order to classify well. 
The capacity of h 4 is 16× smaller than h 3 and the samples evidence this. These serve two purposes: they confirm claims of the IB principle regarding irrelevant infor- hj) lower bound DISPLAYFORM0 E pD (y,x) [log q(y | x hj) lower bound DISPLAYFORM0 (b) Forward MI tracking, autoencoding The classifier always increases MI with the target data (a), while the autoencoder's bottleneck layer compresses label-information. The orange curve in (a) is computed from the classifier's log-likelihood throughout training. Information compression is evidenced in both training regimes, the degree to which is listed as ∆ c in nats on the right of (c) and (d). Increasing linear separability is also shown in (a). For forward decoding of the autoencoder (b), I(y; h 4) = I(y; h 3) since the difference in decoding model is only an average pooling operation, which is applied during encoding for h 4 and decoding for h 3. mation compression; and they give insight into what image invariances a ResNet learns, which could motivate future work in designing models that target these properties. Figure 4: Samples generated using PixelCNN++, conditioned on h 3 in the classifier training set-up. DISPLAYFORM1 DISPLAYFORM2 The original images processed for h 3 are shown in (a). Ten epochs is close to the peak of the fitting stage, while 200 epochs is the end of learning. Unnecessary features (e.g., colour) are preserved at 10 epochs, and the sample diversity is greater at 200 epochs. I(h 3 ; x) is lower at 200 epochs compared to no learning FIG7, but the quality of preserved information is better. What the network keeps and discards Consider that the network compresses information in h 3 such that at its final state there is less information than at initialisation - FIG7. When inspecting the samples of Figure 4 (b) and (f), we see that even though the information content is higher at network initialisation, the sampled images look like poor renditions of their classes. The network learns to keep the useful information. In contrast to this observation we note that not all irrelevant information is discarded. The trees behind the truck in the final row of FIG9 (f) illustrate this. At initialisation the network weights are random. Even with this random network, information is forward propagated to h 4 enough to generate the samples in FIG9 (b). Even though these samples share characteristics (such as colours) with the input, they are not readily recognisable and the shape of the class-relevant components is often lost. Colour specific invariances Foreground and colour partial-invariance is illustrated in both h 3 and h 4 at the end of training. Consider the car and truck samples to see how the foreground colour of these vehicles is not kept by the layers. The horse samples show colour variation. The samples in the deer and truck classes in FIG9 (f) are still clearly within class but deviate significantly from the input image (a).The samples conditioned on h 3 later in training, shown in Figure 4 (e) and (f), are more varied than earlier in training. Class irrelevant information -such as the colours of s (grass, sky, water, etc.) or the specific colour of cars or trucks -is not kept, ing in more varied samples that nonetheless resemble the input images. An unconditional PixelCNN++ was also trained for comparison (see Appendix F for loss curves and unconditionally generated samples). The ResNet architecture enables very deep CNNs. We show that learning representations using a ResNet in information compression in hidden layers. 
We set out in this research to test some of the claims by regarding the information bottleneck principle applied to deep learning. By defining a lower bound on the MI and'decoder' models to compute the MI during classifier and autoencoder training regimes, we explored the notion of compression for generalisation in the context of realistic images and a modern architecture choice. For both classification and autoencoding we observed two stages of learning, characterised by: an initial and relatively short-lived increase and a longer decrease in MI between hidden layers and input training data. Although we cannot confirm the mechanism responsible for compression (stochastic relaxation, for example), we gave an intuitive glimpse into what quality/type of information is kept and discarded as ResNets learn. PixelCNN++ models were used to estimate the MI between hidden layers (of the models under scrutiny) and input data; images were generated conditioned on hidden layers to illustrate the fitting and compression of data in a visual and intuitive fashion. The experimental procedure we developed for this research enables visualising class invariances throughout training. In particular, we see that when a ResNet is maximally (subject to model constraints) compressing information in its hidden layers, the class-irrelevant features of the input images are discarded: conditionally generated samples vary more while retaining information relevant to classification. This has been shown in theory and for toy examples, but never illustrated to the degree that we do here. The encoder architecture used in this research is shown in FIG10. The original PixelCNN formulation (van den) is autoregressive in the sense that it models an image by decomposing the joint distribution as a product of conditionals: DISPLAYFORM0 where each x m is a pixel in the image and h is included for completeness. PixelCNN estimates each colour channel of each pixel using a 255-way softmax function, while the PixelCNN++ improvement does this by estimating a colour-space defined by a K-way (with K = 10 in the original usage) discretized mixture of logistic sigmoids: where π m k is the k th logistic sigmoid mixture coefficient for pixel i, µ m k and s m k are the corresponding mean and scale of the sigmoid, σ(·). The discretization is accomplished by binning the network's output within ±0.5. DISPLAYFORM1 The colour channels are coupled by a simple factorisation into three components (red, green, and blue). First, the red channel is predicted using Equation 5. Next, the green channel is predicted in the same way, but the means of the mixture components, µ m, are allowed to depend on the value of the red pixel. The blue channel depends on both red and green channels in this way. argued that assuming a latent continuous colour intensity and modelling it with a simple continuous distribution (Equation 5) in more condensed gradient propagation, and a memory efficient predictive distribution for x. Other improvements included down-sampling to capture non-local dependencies and additional skip connections to speed up training. The conditioning variable is added to each gated convolution residual block, of which there are six per each of the five hyper-layers in PixelCNN++.The gated convolution residual block structure was shown empirically to improve . The activations of a gated residual block are split into two equal parts and one half is processed through a sigmoid function to produce a mask of values in the range. 
This is element-wise multiplied with the other half of the activations as the'gate'.As for the conditioning variable, it is conventionally a one-hot encoded class label vector that is added to the activations of each gated residual block. Considering the layers chosen for scrutiny in this work FIG10 ), most of the conditioning variables are three-dimensional: two spatial dimensions and a channel dimension. Additionally, we must account for the down-sampling used in PixelCNN++. Therefore, there are four possible transformations of h before it can be integrated into the PixelCNN++ model:1. The conditioning variable is larger (regarding spatial width and height) than the activations to which it needs to be added. The conditioning variable is down-sampled using a strided convolution of two and (if necessary) average pooling with a kernel size of two. The filter width is matched in this same convolution.2. The conditioning variable is smaller than the activations. A sub-pixel shuffle convolution (a; b) is used for up-sampling. The sub-pixel shuffle is an alternative to deconvolution or nearest neighbour up-sampling that allows the model to learn the correct up-sampling without unnecessary padding. A non-strided convolution with a kernel of one matches the filter width.3. The conditioning variable is the same size as the activations. A non-strided convolution with a kernel of one matches the filter width.4. The conditioning variable is, instead, a vector -h 4 in FIG10. The dot product of these and the appropriately sized weight matrix are taken to match the activations. If, for example, h = h 2 is a (16 × 16) × 32 (two-dimensional with 32 filters, the second hyper-layer in FIG10) hidden representation, the first three aforementioned transformations would be in effect because the configuration of PixelCNN++ means that there are activations with spatial resolutions of (32 × 32), (16 × 16), and (8 × 8), to which the conditioning variable must be added. D MORE SAMPLES FIG11 shows conditional samples generated by conditioning PixelCNN++ models on h 2 (see FIG10), respectively. h 2 showed the most compression FIG7 ) but the quality of information that was compressed clearly did not influence the structure of the samples. Instead, global hue and F UNCONDITIONAL PIXELCNN++ FIG0 shows the training curves for an unconditional PixelCNN++ trained on the encoder dataset of CINIC-10. Samples generated are shown in FIG0, giving context and scale to the type of samples generated in this work. Figure 10: Unconditional PixelCNN++ loss curves when trained on the encoder dataset of CINIC-10. Since this is only using one third of CINIC-10, it may be possible to achieve a lower loss when using a larger portion of CINIC-10. The best evaluation loss here corresponds to 3.58 bits per dimension, as opposed to the 2.92 bits per dimension on CIFAR-10 .Figure 11: Unconditional PixelCNN++ generated samples when trained on the encoder dataset of CINIC-10. These samples have good local qualities but are not particularly convincing as real images. This is a known pitfall of autoregressive explicit density estimators.
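Referring back to the four conditioning-variable transformations of Appendix B.1, a PyTorch sketch of that dispatch might look as follows; the module name, kernel sizes, and the exact ordering of channel matching and up-sampling are our own assumptions rather than the paper's implementation.

```python
# Sketch of resizing a conditioning tensor h so it can be added to PixelCNN++ activations.
import torch
import torch.nn as nn

class MatchCondition(nn.Module):
    def __init__(self, in_ch, out_ch, in_hw, out_hw):
        super().__init__()
        if in_hw is None:                      # case 4: h is a vector (e.g. h4)
            self.op = nn.Linear(in_ch, out_ch)
        elif in_hw > out_hw:                   # case 1: h is spatially larger -> downsample
            layers = [nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1)]
            if in_hw // 2 > out_hw:            # pool further if one stride-2 conv is not enough
                layers.append(nn.AvgPool2d(kernel_size=2))
            self.op = nn.Sequential(*layers)
        elif in_hw < out_hw:                   # case 2: h is smaller -> sub-pixel shuffle upsample
            r = out_hw // in_hw
            self.op = nn.Sequential(
                nn.Conv2d(in_ch, out_ch * r * r, kernel_size=1),
                nn.PixelShuffle(r),
            )
        else:                                  # case 3: same spatial size -> match channels only
            self.op = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, h):
        out = self.op(h)
        if out.dim() == 2:                     # vector condition: broadcast over space
            out = out[:, :, None, None]
        return out

# Example: h2 of shape (B, 32, 16, 16) matched to activations at 8x8 resolution
# (the target channel count of 80 is an illustrative choice).
h2 = torch.randn(4, 32, 16, 16)
cond = MatchCondition(in_ch=32, out_ch=80, in_hw=16, out_hw=8)
print(cond(h2).shape)  # torch.Size([4, 80, 8, 8])
```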
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HklbTjRcKX
The Information Bottleneck Principle applied to ResNets, using PixelCNN++ models to decode mutual information and conditionally generate images for information illustration
We study the problem of safe adaptation: given a model trained on a variety of past experiences for some task, can this model learn to perform that task in a new situation while avoiding catastrophic failure? This problem setting occurs frequently in real-world reinforcement learning scenarios such as a vehicle adapting to drive in a new city, or a robotic drone adapting a policy trained only in simulation. While learning without catastrophic failures is exceptionally difficult, prior experience can allow us to learn models that make this much easier. These models might not directly transfer to new settings, but can enable cautious adaptation that is substantially safer than na\"{i}ve adaptation as well as learning from scratch. Building on this intuition, we propose risk-averse domain adaptation (RADA). RADA works in two steps: it first trains probabilistic model-based RL agents in a population of source domains to gain experience and capture epistemic uncertainty about the environment dynamics. Then, when dropped into a new environment, it employs a pessimistic exploration policy, selecting actions that have the best worst-case performance as forecasted by the probabilistic model. We show that this simple maximin policy accelerates domain adaptation in a safety-critical driving environment with varying vehicle sizes. We compare our approach against other approaches for adapting to new environments, including meta-reinforcement learning. An experienced human driving a rental car for the first time is initially very aware of her lack of familiarity with the car. How sensitive is it to acceleration and braking? How does it respond to steering? How wide is the vehicle and what is its turning radius? She drives mindfully, at low speeds, braking far ahead of desired stops, and making wide turns, all the while observing the car's responses and adapting to it. Within minutes, once she is familiar with the car, she begins to drive more fluently and efficiently. Humans draw upon their prior experiences to perform this kind of safe, quick adaptation to unfamiliar situations all the time, such as when playing with a new tennis racquet, or walking on a new slippery surface. Such problems are critical to address in autonomous systems: such as when a self-driving car must learn to drive in a new country, or when a planetary rover might have to learn to explore a harsh new environment. Missteps in real-world situations can cause real damage to robots and their environments. An important bottleneck in applying today's standard machine learning approaches to control in these real-world situations is that they are trained without any notion of safe behavior under uncertainty. Recent works have attempted to address this by proposing methods for safe exploration during reinforcement learning -in other words, how might an agent avoid risky actions during training time? This still requires that the robot acquire its notions of uncertainty and risks at the same time as it is learning to perform tasks in the new environment, which is difficult and precarious. Could we instead rely on transferring notions of uncertainty and risk acquired from prior experience in other related domains, such as in simulated environments, where safety may not be as much of a concern? In other words, could we make the safe learning problem easier through knowledge transfer, relaxing the problem to safe adaptation, like the human driver? 
How might the planetary rover draw on its experience in many varied terrains on Earth to perform meaningfully cautious actions during learning on the unknown terrain of a new planet? Motivated by these questions, we propose a model-based reinforcement learning approach called risk averse domain adaptation (RADA). RADA works by first pretraining a probabilistic dynamics model on a population of training domains with varied, unknown dynamics. Through this experience over many environments, the model learns to estimate the epistemic uncertainty (model uncertainty) of unknown environment dynamics, thus permitting estimation of a distribution of outcomes for any action executed by the agent. When introduced into a new target environment, RADA uses this estimated distribution of outcomes to select cautious actions that obey the following maximin notion of risk-aversion: among various candidate action sequences, it executes those that lead to the best worst-case performance, as predicted by the model. Much like the human driver in the example above, all the information collected during this cautious phase of exploration is fed back into the model to finetune it to the new domain, leading to increasingly confident predictions. Over time, RADA steadily estimates lower risks and approaches optimality in the target environment. As we demonstrate in experiments in a driving domain, the experience acquired during RADA's pretraining phase enables fast yet safe adaptation within only a handful of episodes. Cautious or risk-averse learning has close connections to learning robust control policies, as well as the uncertainty estimation derived from Bayesian reinforcement learning . Rather than conventionally maximizing a reward function, accounting for risk usually involves allocating more attention to'worst-case' outcomes in an environment. Such outcomes become particularly important in out-of-domain settings, where purely optimizing in the training domain does not guarantee good performance in the test domain, the problem setting that we consider in this work. Safety in reinforcement learning. Incorporating safety requires properly managing risks and reducing the impact of unforeseen negative outcomes. Risk management is extensively studied in quantitative finance. In portfolio optimization, a commonly used quantity that measures the expected return considering the worse α-% of cases is Conditional Value at Risk (CVaR) . With probability α, the reward is greater than the CVaR measure. CVaR is formulated as E[R|R ≤ υ α]. Rather than optimizing the expected reward, risk averse policies optimize the lower α-quartile of the distribution of rewards. While meta-learning approaches like RL 2 can potentially learn safety by adapting across learning episodes, we found this was not possible in the environments we tested. To address safety more expicitly, the reinforcement learning community is adopting measures like CVaR as quantities that can be optimized (; ; ;) to create policies which are robust to shifts from source to target domains. propose learning robust policies by sampling from the α-quartile of an ensemble of models. While the model ensemble is trained on a given source distribution, the policy is only trained on the lower α-quartile rewards from trajectories sampled from this ensemble. This leads to policies which are more conservative and therefore more robust to shift when deployed from source to target domains. Epistemic uncertainty in reinforcement learning. 
While learning a robust model is beneficial for transferring to different domains, model-based reinforcement learning offers an additional unsupervised learning signal that can be exploited at test time. In particular, prior work has shown that a model can be quickly adapted during test time by meta-learning for fast adapting parameters during training (a;). These fast adapting parameters offers greater flexibility in adapting to unforeseen circumstances which an agent may encounter at test time. Nagabandi et al. (2018a) show that real robots can quickly adapt to broken or miscalibrations during evaluation through this fast adaptation acquired through meta-learning. Such approaches are complementary to our approach, as they provide a means to explicitly train for fast adaptation to disturbances in the environment, while they do not account for any notion of safety. propose using the uncertainty of a model to regularize policy learning. The policy is encouraged to create trajectories which are distributionally close to trajectories observed in training data. After training, the observations of the model and actions of the policy generate state trajectories which near the domain it has been trained on. In other words, the policy has a preference to keep the trajectories within its training domain. In our work, our policy is encouraged to behave cautiously in unfamiliar environments rather than remain in familiar ones. train a collision prediction model to favor'safe' (low velocity) collisions. Using uncertainty estimates, this collision model will initially behave cautiously in a new environment. Similar to our method, the model becomes less conservative as it adapts to the new environment and lowers its uncertainty. Domain randomization. Domain randomization (; ;) attempts to train policies that are able to transfer to some target environment by training an agent in deliberately randomized simulated environments to allow learning a policy that is invariant to the randomized parameters, and thus performs robustly in the target environment. RADA also pretrains on a set of environments with varied dynamics, but different from these prior works, we operate in a safety-critical setting, focusing on safe adaptation to the target environment -to accomplish this, we follow an explicitly cautious action policy at adaptation time, different from the policy used in the pretraining environments. Before discussing RADA, we first lay out some preliminaries. We build upon PETS , a recently proposed approach for model-based reinforcement learning. We describe the main features of the PETS framework below: Probabilistic dynamics model. PETS trains an ensemble of probabilistic dynamics models within its environment. Each model in the ensemble is a probabilistic neural network that outputs a distribution over the next state s conditioned on the current state s and action a. The data for training these models comes from trajectories executed by following the same scheme for action selection that will be eventually used at test time. Action selection. This action selection scheme is sampling-based model-predictive control (MPC): an evolutionary search method finds action sequences with the highest predicted reward. The reward of an action sequence in turn is computed by propagating action outcomes autoregressively through the learned probabilistic models. Reward computation. 
Specifically, starting from a state s_0, for each sampled action sequence A = [a_0, . . . , a_{H−1}], where H is the planning horizon, the dynamics model first predicts a distribution over s_1 after executing a_0. A particle propagation method draws Monte Carlo samples from this distribution. For each sample, the dynamics model then predicts the state distribution for s_2, conditioned on executing a_1, and the process repeats. This recursive particle propagation results in a large number N of particles {ŝ^(i)}_{i=1}^N after H steps. These N particles represent samples from the distribution of possible states after executing A. Each such particle i ∈ [1, N] is now assigned a predicted reward r_i, which is a function of its full state trajectory starting from s_0. Finally, the mean of those predicted rewards is considered the score of the action sequence: R(A) = (1/N) Σ_{i=1}^{N} r_i (equation 1). We call this the action score. Then, the winning action sequence A* = arg max_A R(A) with the highest action score is selected, the first action in that sequence is executed, and the whole process repeats starting from the resulting new state s_1. Now we present our approach, Risk-Averse Domain Adaptation (RADA). As motivated in Sec 1, RADA approaches safe learning as an adaptation problem, where an agent may draw upon its experience in a variety of environments to guide cautious behavior in a new safety-critical target environment while minimizing the risk of catastrophic failure. Consider a set of environments, each defined by the value of an unknown domain ID variable z, which controls the dynamics in each domain. RADA assumes a setting where we train on some of these domains, and must then transfer to new domains with unknown, potentially unseen, and even out-of-distribution values of z. As a running example, consider learning to drive cars, where each car is represented by its own value of the domain ID variable z. This domain ID might include a potentially large set of unknown and hard-to-measure properties, which together determine the way that the car drives. We propose a solution, RADA, that builds upon the PETS framework (Sec 3). PETS has been demonstrated to work well across a variety of environments. Further, compared to alternative approaches, this framework has two natural advantages for our risk-averse domain adaptation setting, both of which are critical to our approach: (i) the probabilistic models of PETS can be adapted to capture the "epistemic uncertainty" about the dynamics of a new domain, and (ii) model-based RL agents contain dynamics models that can be trained in the absence of any rewards or supervision, providing a route for adaptation to a new environment. We now discuss how RADA builds upon PETS. RADA first builds a probabilistic ensemble of dynamics models on the training domains that capture the epistemic uncertainty in dynamics due to unknown z. We call this the pretraining phase. When an agent with this pretrained model encounters a new domain, we use pessimistic predictions from the model to drive cautious exploration to finetune the model to the new domain, leading to safe and fast adaptation. We call this the adaptation/finetuning phase. Algorithm 1 provides pseudocode for RADA, and the rest of this section explains RADA in detail. While the PETS probabilistic ensemble is trained to represent uncertainty within a single environment, we would like our model to capture the uncertainty associated with being dropped into a new environment, with unknown domain ID z.
To do this, we propose a "pretraining" phase, where a single PETS ensemble is trained across all the training environments, with varying, unknown values of z. Specifically, at the beginning of each training episode, we randomly sample one of the training z's from a uniform distribution. Since z determines the environment dynamics and is unavailable to the learned dynamics model, the ensemble has incentive to learn to model this as epistemic uncertainty during the pretraining phase. See the pretraining procedure in Algorithm 1. After this pretraining, how might the uncertainty captured in the ensemble inform cautious exploration during adaptation in the target environment? To do this, we adapt the PETS action selection and reward computation scheme using a maximin notion of cautious behavior, in line with notions of risk used in prior work across disciplines . Specifically, we replace the action score of equation 1 with a newly defined "generalized action score" R γ (A), in which the "caution parameter" γ ∈ controls the degree of caution exercised in evaluating action sequences in the new environment. R γ (A) is defined as: where υ k (r) denotes the value of the k th percentile of predicted rewards {r j} N j=1 among the N particles after particle propagation. Unpacking this definition, R γ measures the mean score of the bottom 100 − γ percentile of the predicted outcomes from the PETS model. When γ = 50, for instance, it measures the mean of the worst 50 percentile of predicted rewards. This is a pessimistic evaluation of the prospects of the action sequence A -it only pays attention to the worst performing particles in the distribution. At caution γ = 0, R γ exactly matches the definition of R in equation 1: it measures the mean predicted reward of all the particles produced after particle propagation through the model. In our experiments, we heuristically set γ = 50. Now, we define a "γ-cautious action policy" as one that selects actions based on the generalized action score R γ. In other words, A * γ = arg max A R γ (A). We propose to deploy such γ-cautious action policies at adaptation time. The intuition is straightforward: in an unknown environment, the agent deliberately performs cautious actions to perform safe adaptation. Even though it eventually seeks to achieve high mean performance in the target environment, it does not select actions that achieve the highest expected reward under its model. Instead, it chooses to be conservative, not trusting the model's expectations fully. Algorithm 1 RADA 1: procedure PRETRAINING 2: Initialize the probabilistic ensemble dynamics model f Initialize data D using a random controller in a random training domain for one trial. for domain ID z ∼ training domains do Train the probabilistic ensemble dynamics model f on D for t = 0 to task horizon do for evolutionary search stage=1,2,... do for sampled action sequence A do Run state propagation to produce N particles 10: end for Refine search to find A * = arg max R(A) end for Execute first action of A * Record outcome in D for target domain adaptation episode=1,2,... do for t = 0 to task horizon do for evolutionary search stage=1,2,... 
do for sampled action sequence A do Run state propagation 25: Evaluate A with generalized score Rγ (A) end for Refine search to find A * = arg max Rγ (A) end for Execute first action of A * Record outcome in D Finetune the probabilistic ensemble model f on D end for end for As it gathers experience in the target environment using the γ-cautious policy, RADA also improves its dynamics model over time to finetune it to the new environment. Since dynamics models do not need any manually specified reward function during training, the ensemble model can continue to be trained in the same way as during the pretraining phase. We propose to stabilize adaptation by keeping the model close to the original model. To do this, we maintain a replay buffer of data from the pretraining episodes, conducted outside the target domain. During adaptation, we compute all model updates on this full dataset, including the replay buffers. We use probabilistic neural network ensembles for the dynamics model , and training proceeds through stochastic gradient descent. As the model improves over time, the distribution of predicted outcomes becomes more and more narrow over time. For a deterministic environment, the model eventually converges to deterministic predictions, so that R γ is the same for all γ. In other words, once the model is welltrained, the γ-cautious action policy is identical to the standard action policy. The adaptation procedure in Algorithm 1 sums up cautious action selection and model finetuning. We now evaluate various ablations of RADA in a driving environment, evaluating the importance of three components of our technique for generalization to unseen out-of-domain environments: (i) pretraining on multiple domains (i.e. 'domain randomization'), (ii) replaying pretraining to stabilize finetuning in the target environment, (iii) γ-cautious action policy (γ > 0) at adaptation time -we heuristically set γ = 50 for our experiments. RADA is the version that includes all three techniques. For all methods, we use a PETS ensemble of 5 fully connected models, with each one having 4 layers. We hypothesize that being cautious would not only allow RADA to adapt quicker by avoiding risk of catastrophic failure, but also that as uncertainty about the environment is resolved during adaptation, a cautious policy will become similar to the optimal policy, leading to higher final reward. Our first baseline is RADA without cautious adaptation in the test environment: RADA\caution. Next we separately ablate out multi-domain pretraining and pretraining replay from this approach to create two more baselines: RADA\{caution,DR} and RADA\{caution,replay}. Through these baselines, we systematically compare the contributions of various components of RADA. We also compare RADA against PETS trained directly in the target environment, train-on-target. As an external baseline, we compare against robust adversarial reinforcement learning (RARL). RARL works by training a model-free RL agent jointly with an adversary that perturbs the actions emitted by the agent. We train the adversary to perturb the motor torques in Duckietown. Finally, we also implemented three meta-learning approaches as baselines: GrBal (a), RL 2 , and MOLe (b). However, in our experiments, all three metalearning approaches experienced training failures in our environment, failing to consistently reach the goal even during the pretraining phase. Consequently, we do not consider these methods in the more detailed reported in this section. 
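Since the display equation defining R_γ is missing from this copy, the sketch below records the cautious scoring rule as the text describes it, i.e. the mean predicted reward of the worst (100 − γ)% of propagated particles (a CVaR-style measure), together with the maximin action selection it induces. The random candidate sampling stands in for the evolutionary search used by PETS, and the fake particle-propagation model is purely illustrative.

```python
# Minimal sketch of the generalized action score R_gamma and cautious action selection.
# gamma = 0 recovers the ordinary action score R(A) of equation 1 (the mean over all
# N particles); gamma = 50 averages only the worst half of the predicted outcomes.
import numpy as np

def generalized_action_score(particle_rewards, gamma=50.0):
    """Mean of the bottom (100 - gamma) percentile of particle rewards."""
    r = np.asarray(particle_rewards)
    cutoff = np.percentile(r, 100.0 - gamma)        # upsilon_{100 - gamma}(r)
    return r[r <= cutoff].mean()

def select_cautious_action(candidates, propagate, gamma=50.0):
    """Pick the sequence with the best pessimistic score; execute only its first action."""
    scores = [generalized_action_score(propagate(A), gamma) for A in candidates]
    best = candidates[int(np.argmax(scores))]
    return best[0]  # MPC: apply the first action, then replan from the new state

# Illustration with a fake particle model returning N = 20 predicted rewards per candidate.
rng = np.random.default_rng(0)
candidates = [rng.uniform(-1, 1, size=(5, 2)) for _ in range(64)]   # H = 5, 2-D actions
propagate = lambda A: rng.normal(loc=-np.abs(A).sum(), scale=1.0, size=20)
print(select_cautious_action(candidates, propagate, gamma=50.0))
```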
Car driving environment. Our driving environment is based on Duckietown , a physically accurate driving environment designed for sim-to-real transfer. The task, illustrated in Fig 1, is to make a right turn around a corner to reach a fixed goal tile. Each tile is fixed to a size of 0.585. We modify the environment such that when the car attempts to take an action that would drive it off of the road tiles, it is held in place. If the car clips the corner, it gets stuck unless the agent has learned to reverse out from the corner and try the turn again. The task rewards faster, more efficient turns, so that there is incentive to go near the corner. At the same time, since there is a big price to pay for it, a cautious agent must avoid hitting the corner at all costs. The agent can observe its current x, y coordinates, current velocity, and steering angle. At each time step, it is given a reward equal to negative Manhattan distance from the goal, with a completion bonus of 100 if it successfully reaches the goal tile. The agent has direct control of the steering angle and velocity of the car at each time step. See figure 1. Each domain is specified by a one-dimensional domain ID, the width of the car, which is unknown to the agent. During pretraining, the car width ranges from 0.050 to 0.099, and is sampled uniformly from this range before each training episode. Having driven these cars of varying widths, we evaluate each method's ability to adapt to driving new cars. We test adaptation to one in-distribution domain: width 0.075, and five out-of-distribution domains: 0.1, 0.125, 0.15, 0.175, and 0.20. We vary the car width because of its direct influence on the optimal trajectory of the car: wider cars must make wider turns to avoid catastrophe. Performance Metrics. We measure the return (sum of rewards over a full episode) and the number of collisions underwent in the target environment. For each method, we report the "average maximum reward" over adaptation time t, which is the average over 10 random seeds of the maximum over t adaptation episodes of the reward obtained in the target environment. Finally, to measure the safety of the adaptation process, we also report the cumulative number of collisions suffered by each method during adaptation, which more directly measures the extent to which different methods avoid catastrophic failures and perform safe adaptation. Results. For all RADA variants, we perform pretraining for 32 episodes: 2 initial rollouts with a random policy and 30 on-policy episodes. RADA\{caution,DR} is pretrained for the same 32 episodes but on a single training domain -the one with car width 0.1, which is the closest to the out-of-domain target car widths (recall the training car widths are 0.050-0.099). RARL trains model-free policies, which typically require more pretraining episodes: we use the default settings and train it for 1000 episodes in the pretraining environments. Fig 2 shows the average maximum reward after 10 adaptation episodes and the average total number of collisions for each method, as a function of the car width. All methods perform worse farther away from the training car widths, but RADA maintains its performance up to car width 0.15, and deteriorates more gracefully, maintaining good performance even up to car width 0.2, over two times the largest car width observed at pretraining time. 
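For concreteness, the per-step reward and the "average maximum reward" metric defined above can be written as follows; the helper names and the example numbers are illustrative assumptions.

```python
# Minimal sketch of the task reward (negative Manhattan distance to the goal tile plus
# a completion bonus of 100) and of the reported "average maximum reward": the maximum
# reward seen within the first t adaptation episodes, averaged over random seeds.
import numpy as np

def step_reward(pos, goal, reached_goal):
    manhattan = abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])
    return -manhattan + (100.0 if reached_goal else 0.0)

def average_maximum_reward(episode_returns):
    """episode_returns: array of shape (num_seeds, num_adaptation_episodes)."""
    running_best = np.maximum.accumulate(np.asarray(episode_returns), axis=1)
    return running_best.mean(axis=0)  # one value per adaptation episode t

returns = np.array([[-40.0, -15.0, 60.0],
                    [-55.0, 70.0, 65.0]])       # 2 seeds, 3 adaptation episodes
print(average_maximum_reward(returns))          # -> -47.5, 27.5, 65.0
```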
Not only does RADA achieve the highest rewards, but it also achieves them while being the most safe -it suffers the least number of collisions during adaptation across all these environments. Comparing RADA ablations, cautious action selection during adaptation proves critical to performance, and RADA\caution does much worse than RADA throughout. Domain randomization and pretraining replay have relatively smaller impacts on the performance after ten adaptation episodes, but we will show that they impact training speed and stability. Training directly on the target domain (train-on-target), aside from naturally ing in more collisions, does not in the best performance even in terms of rewards. We believe that this is because pretraining on the training car widths has an additional advantage: it sets up a curriculum for training that makes it easier for the agent to solve the exploration problem of learning how to make the right turn. Finally, RARL policies are relatively robust to changes in car width, but perform uniformly worse than RADA: they yield lower reward and experience more collisions at all car widths. Adaptation speed. We now plot the over adaptation time in each target environment for both the average maximum reward and the running total boundary collisions to show adaptation speed in the target environments. Fig 3 shows the average maximum reward over adaptation time for various methods, and at various target car widths. We evaluate at one in-domain car width (0.075) and at five out-of-domain car widths (0.1, 0.125, 0.15, 0.175, and 0.2). Across all six, RADA yields the highest rewards at all times except for car width 0.1. Fig 4 shows similar plots for the average cumulative total boundary collisions over adaptation time. Once again, RADA outperforms all approaches at most domains, with the least overall collisions at all domains except for car width 0.1. Further, as seen in these cases, RADA\{caution,DR} adapts more slowly than other approaches, demonstrating the value of multi-domain pretraining. Further RADA\{caution,replay} leads to very unstable training, reflected here in the fact that maximum reward over adaptation time does not monotonically increase, unlike the other approaches. Epistemic uncertainty. RADA relies on the fact that a probabilistic dynamics model trained on a population of source domains with varying unknown values of the domain ID z, learns to capture Figure 5: How well does the pretrained dynamics model capture the epistemic uncertainty due to unknown car width? These overhead views present the trajectories predicted by the dynamics model (thin curves in desaturated colors) for a given starting state of the car and a specific action sequence. Overlaid on top of these are ground truth trajectories of the car (thick curves) corresponding to varying car widths (legend in the rightmost panel). See text for details. the epistemic uncertainty associated with z. Fig 5 visualizes how well the dynamics model learns to capture the variation in car behavior due to the unknown car width in our Duckietown setting. For a car in a given state, we evaluate a specific action sequence, and plot the car trajectories predicted by the dynamics model on an overhead view of the environment. These are shown in thin, desaturated curves in the panels in Fig 5. As the panels show, these trajectories originate at the car and spread out as they get farther away, representing the higher uncertainty after more actions have been performed. 
On top of these trajectories predicted by the model, we overlay thicker curves corresponding to ground truth trajectories of cars of various widths, executing the same action sequence. In each panel of Fig 5, these ground truth trajectories all lie largely within the support of the predicted trajectory distribution, indicating that the model captures the epistemic uncertainty due to varying car widths. A particularly illustrative case is in the left-most panel, where the aggressive action trajectory takes the car very close to the corner of the road. This leads to erratic behavior that causes the car to swerve into the side of the road after clipping the corner in some cases, or proceed successfully towards the goal near the center of the road in other cases. The distribution produced by the model's predicted trajectories is similarly bimodal, correctly capturing these two kinds of behavior. Appendix B Fig 7 shows how these predictions evolve over the course of RADA pretraining. Overall, these visualizations demonstrate that the dynamics model successfully learns to represent the epistemic uncertainty due to unknown car width. We have proposed RADA, a new approach to model-based reinforcement learning for safe, quick adaptation of RL agents in new environments with unknown dynamics. RADA relies on two key ideas: transferring knowledge from training in a variety of training environments, and using a maximin notion of risk-aversion during action selection in the target environment. We show in a physically accurate driving environment that RADA performs fast, safe adaptation to learn to drive cars around corners, even when they are up to two times larger than any cars it has driven at pretraining time. In the main paper, we perform experiments where we sample the car width dynamically at the beginning of each pretraining episode, from the full pretraining range (width 0.05 to 0.099), so that there are effectively an infinite number of pretraining environments available to sample from. A more realistic situation might involve having a fixed, small number of pretraining environments, among which we select one at random for each pretraining episode. We now report the results of preliminary experiments that investigate the feasibility of RADA in this setting. For these experiments, we sample a small number (2, 5, or 10) of car widths from the original pretraining range and limit pretraining to those environments only. Overall, these results indicate that RADA is feasible when the number of pretraining environments is finite and small. In Figure 5, we plotted the trajectory predictions made by the fully pretrained dynamics model for a fixed starting state and action sequence. Here, we show how the dynamics model's trajectory predictions improve over pretraining time. To do this, we replot the leftmost panel in Fig 5, but this time using dynamics models from various stages of pretraining. Fig 7 shows the results. At first, the predictions are completely random and the predicted trajectories occupy the entire state space. They become more accurate throughout the course of training, approaching the trajectories plotted in the leftmost image of Fig 5. Figure 7: How do the dynamics model's predicted trajectories evolve over pretraining time? These plots demonstrate the predicted trajectories for a fixed action sequence at different stages of RADA pretraining, starting at the first episode of training after the initial random rollouts.
As the model trains in the pretraining environments, it gradually learns to model the epistemic uncertainty associated with the unknown car width.
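The Fig 5 / Fig 7 visualizations can be produced with a propagation routine like the sketch below, which mirrors the planner's state propagation but keeps whole state trajectories for plotting; the ensemble interface name is a placeholder, and the particle count is an assumption.

```python
import numpy as np

def predicted_trajectories(ensemble, start_state, action_seq, n_particles=30):
    """Sample full state trajectories for one fixed action sequence, one per
    particle, resampling an ensemble member at every step."""
    trajs = []
    for _ in range(n_particles):
        s, traj = start_state.copy(), [start_state.copy()]
        for a in action_seq:
            model = ensemble[np.random.randint(len(ensemble))]
            s = model.sample_next_state(s, a)   # assumed model interface
            traj.append(s.copy())
        trajs.append(np.stack(traj))
    return np.stack(trajs)                      # [n_particles, T + 1, state_dim]
```

Plotting the (x, y) components of these particles as thin curves, with the ground-truth trajectories for each car width overlaid as thick curves, reproduces the overhead views described above.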
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BkxA5lBFvH
Adaptation of an RL agent in a target environment with unknown dynamics is fast and safe when we transfer prior experience in a variety of environments and then select risk-averse actions during adaptation.
We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them; instead, our model learns by simply looking at images and reading paired questions and answers. Our model builds an object-based scene representation and translates sentences into executable, symbolic programs. To bridge the learning of two modules, we use a neuro-symbolic reasoning module that executes these programs on the latent scene representation. Analogical to human concept learning, the perception module learns visual concepts based on the language description of the object being referred to. Meanwhile, the learned visual concepts facilitate learning new words and parsing new sentences. We use curriculum learning to guide the searching over the large compositional space of images and language. Extensive experiments demonstrate the accuracy and efficiency of our model on learning visual concepts, word representations, and semantic parsing of sentences. Further, our method allows easy generalization to new object attributes, compositions, language concepts, scenes and questions, and even new program domains. It also empowers applications including visual question answering and bidirectional image-text retrieval. Humans are capable of learning visual concepts by jointly understanding vision and language BID12 BID8 BID15. Consider the example shown in Figure 1 -I. Imagine that someone with no prior knowledge of colors is presented with the images of the red and green cubes, paired with the questions and answers. They can easily identify the difference in objects' visual appearance (in this case, color), and align it to the corresponding words in the questions and answers (Red and Green). Other object attributes (e.g., shape) can be learned in a similar fashion. Starting from there, humans are able to inductively learn the correspondence between visual concepts and word semantics (e.g., spatial relations and referential expressions, Figure 1 -II), and unravel compositional logic from complex questions assisted by the learned visual concepts (Figure 1 -III, also see BID0).Motivated by this, we propose the neuro-symbolic concept learner (NS-CL), which jointly learns visual perception, words, and semantic language parsing from images and question-answer pairs. NS-CL has three modules: a neural-based perception module that extracts object-level representations from the scene, a visually-grounded semantic parser for translating questions into executable programs, and a symbolic program executor that reads out the perceptual representation of objects, classifies their attributes/relations, and executes the program to obtain an answer. Figure 1: Humans learn visual concepts, words, and semantic parsing jointly and incrementally. I. Learning visual concepts (red vs. green) starts from looking at simple scenes, reading simple questions, and reasoning over contrastive examples BID12. II. Afterwards, we can interpret referential expressions based on the learned object-based concepts, and learn relational concepts (e.g., on the right of, the same material as). III Finally, we can interpret complex questions from visual cues by exploiting the compositional structure. NS-CL learns from natural supervision (i.e., images and QA pairs), requiring no annotations on images or semantic programs for sentences. Instead, analogical to human concept learning, it learns via curriculum learning. 
NS-CL starts by learning representations/concepts of individual objects from short questions (e.g., What's the color of the cylinder?) on simple scenes (≤3 objects). By doing so, it learns object-based concepts such as colors and shapes. NS-CL then learns relational concepts by leveraging these object-based concepts to interpret object referrals (e.g., Is there a box right of a cylinder?). The model iteratively adapts to more complex scenes and highly compositional questions. NS-CL's modularized design enables interpretable, robust, and accurate visual reasoning: it achieves state-of-the-art performance on the CLEVR dataset (a). More importantly, it naturally learns disentangled visual and language concepts, enabling combinatorial generalization w.r.t. both visual scenes and semantic programs. In particular, we demonstrate four forms of generalization. First, NS-CL generalizes to scenes with more objects and longer semantic programs than those in the training set. Second, it generalizes to new visual attribute compositions, as demonstrated on the CLEVR-CoGenT (a) dataset. Third, it enables fast adaptation to novel visual concepts, such as learning a new color. Finally, the learned visual concepts transfer to new tasks, such as image-caption retrieval, without any extra fine-tuning. Our model is related to research on joint learning of vision and natural language. In particular, there are many papers that learn visual concepts from descriptive languages, such as image-captioning or visually-grounded question-answer pairs (; ; ; ; BID14, dense language descriptions for scenes , video-captioning BID10 and video-text alignment .Visual question answering (VQA) stands out as it requires understanding both visual content and language. The state-of-the-art approaches usually use neural attentions (; BID6 ;). Beyond question answering, Johnson et al. (2017a) proposed the CLEVR (VQA) dataset to diagnose reasoning models. CLEVR contains synthetic visual scenes and questions generated from latent programs. Table 1 compares our model with state-of-the-art visual reasoning models BID2; ) along four directions: visual features, semantics, inference, and the requirement of extra labels. explored an interpretable, object-based visual representation for visual reasoning. It performs well, but requires fully-annotated scenes during training. Our model also adopts an object-based visual representation, but the representation is learned only based on natural supervision (questions and answers). BID1 Figure 2: We propose to use neural symbolic reasoning as a bridge to jointly learn visual concepts, words, and semantic parsing of sentences.networks and answers the questions by question-conditioned attention over the object features. In contrast, NS-CL parses question inputs into programs and executes them on object features to get the answer. This makes the reasoning process interpretable and supports combinatorial generalization over quantities (e.g., counting objects). Our model also learns general visual concepts and their association with symbolic representations of language. These learned concepts can then be explicitly interpreted and deployed in other vision-language applications such as image caption retrieval. There are two types of approaches in semantic sentence parsing for visual reasoning: implicit programs as conditioned neural operations (e.g., conditioned convolution and dual attention) and explicit programs as sequences of symbolic tokens BID2 b; ). 
As a representative, BID2 build modular and structured neural architectures based on programs for answering questions. Explicit programs gain better interpretability, but usually require extra supervision such as groundtruth program annotations for training. This restricts their application. We propose to use visual grounding as distant supervision to parse questions in natural languages into explicit programs, with zero program annotations. Given the semantic parsing of questions into programs, proposed a purely symbolic executor for the inference of the answer in the logic space. Compared with theirs, we propose a quasi-symbolic executor for VQA.Our work is also related to learning interpretable and disentangled representations for visual scenes using neural networks. proposed convolutional inverse graphics networks for learning and inferring pose of faces, while learned disentangled representation of pose of chairs from images. proposed the neural scene de-rendering framework as an inverse process of any rendering process.; learned disentangled representations using deep generative models. In contrast, we propose an alternative representation learning approach through joint reasoning with language. We present our neuro-symbolic concept learner, which uses a symbolic reasoning process to bridge the learning of visual concepts, words, and semantic parsing of sentences without explicit annotations We treat attributes such as Shape and Color as neural operators. The operators map object representations into a visual-semantic space. We use similarity-based metric to classify objects.for any of them. We first use a visual perception module to construct an object-based representation for a scene, and run a semantic parsing module to translate a question into an executable program. We then apply a quasi-symbolic program executor to infer the answer based on the scene representation. We use paired images, questions, and answers to jointly train the visual and language modules. Shown in Figure 2, given an input image, the visual perception module detects objects in the scene and extracts a deep, latent representation for each of them. The semantic parsing module translates an input question in natural language into an executable program given a domain specific language (DSL). The generated programs have a hierarchical structure of symbolic, functional modules, each fulfilling a specific operation over the scene representation. The explicit program semantics enjoys compositionality, interpretability, and generalizability. The program executor executes the program upon the derived scene representation and answers the question. Our program executor works in a symbolic and deterministic manner. This feature ensures a transparent execution trace of the program. Our program executor has a fully differentiable design w.r.t. the visual representations and the concept representations, which supports gradient-based optimization during training. Visual perception. Shown in Figure 2, given the input image, we use a pretrained Mask R-CNN to generate object proposals for all objects. The bounding box for each single object paired with the original image is then sent to a ResNet-34 to extract the region-based (by RoI Align) and image-based features respectively. We concatenate them to represent each object. Here, the inclusion of the representation of the full scene adds the contextual information, which is essential for the inference of relative attributes such as size or spatial position. 
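A minimal sketch of this object-based feature extraction is given below. It substitutes off-the-shelf torchvision models (a COCO-pretrained Mask R-CNN and an ImageNet ResNet-34) for the detectors trained in the paper, and the pooling scheme and feature dimensions are illustrative assumptions.

```python
import torch
import torchvision

# Stand-ins for the paper's detectors; the `weights` argument assumes torchvision >= 0.13.
detector = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT").eval()
resnet = torchvision.models.resnet34(weights="DEFAULT")
backbone = torch.nn.Sequential(*list(resnet.children())[:-2]).eval()   # output stride 32

@torch.no_grad()
def object_features(image):
    """image: float tensor [3, H, W] in [0, 1]. Returns [k, 1024] object features:
    RoIAlign-pooled region features concatenated with a global scene feature."""
    boxes = detector([image])[0]["boxes"]                       # [k, 4] object proposals
    fmap = backbone(image.unsqueeze(0))                         # [1, 512, H/32, W/32]
    region = torchvision.ops.roi_align(fmap, [boxes],
                                       output_size=(7, 7), spatial_scale=1.0 / 32)
    region = region.mean(dim=(2, 3))                            # [k, 512] per-object
    scene = fmap.mean(dim=(2, 3)).expand(region.shape[0], -1)   # [k, 512] scene context
    return torch.cat([region, scene], dim=1)                    # [k, 1024]
```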
Concept quantization. Visual reasoning requires determining an object's attributes (e.g., its color or shape). We assume each visual attribute (e.g., shape) contains a set of visual concept (e.g., Cube). In NS-CL, visual attributes are implemented as neural operators, mapping the object representation into an attribute-specific embedding space. FIG1 shows an inference an object's shape. Visual concepts that belong to the shape attribute, including Cube, Sphere and Cylinder, are represented as vectors in the shape embedding space. These concept vectors are also learned along the process. We measure the cosine distances ·, · between these vectors to determine the shape of the object. Specifically, we compute the probability that an object o i is a cube by σ ShapeOf(o i), v Cube − γ τ, where ShapeOf(·) denotes the neural operator, v Cube the concept embedding of Cube and σ the Sigmoid function. γ and τ are scalar constants for scaling and shifting the values of similarities. We classify relational concepts (e.g., Left) between a pair of objects similarly, except that we concatenate the visual representations for both objects to form the representation of their relation. DSL and semantic parsing. The semantic parsing module translates a natural language question into an executable program with a hierarchy of primitive operations, represented in a domain-specific language (DSL) designed for VQA. The DSL covers a set of fundamental operations for visual reasoning, such as filtering out objects with certain concepts or query the attribute of an object. The operations share the same input and output interface, and thus can be compositionally combined to form programs of any complexity. We include a complete specification of the DSL used by our framework in the Appendix A. Our semantic parser generates the hierarchies of latent programs in a sequence to tree manner BID11. We use a bidirectional GRU BID7 to encode an input question, which outputs a fixed-length embedding of the question. A decoder based on GRU cells is applied to the embedding, and recovers the hierarchy of operations as the latent program. Some operations takes concepts their parameters, such as Filter(Red) and Query(Shape). These concepts are chosen from all concepts appeared in the input question. Figure 4 (B) shows an example, while more details can be found in Appendix B.Quasi-symbolic program execution. Given the latent program recovered from the question in natural language, a symbolic program executor executes the program and derives the answer based on the object-based visual representation. Our program executor is a collection of deterministic functional modules designed to realize all logic operations specified in the DSL. Figure 4 (B) shows an illustrative execution trace of a program. To make the execution differentiable w.r.t. visual representations, we represent the intermediate in a probabilistic manner: a set of objects is represented by a vector, as the attention mask over all objects in the scene. Each element, Mask i ∈ denotes the probability that the i-th object of the scene belongs to the set. For example, shown in Figure 4 (B), the first Filter operation outputs a mask of length 4 (there are in total 4 objects in the scene), with each element representing the probability that the corresponding object is selected out (i.e., the probability that each object is a green cube). The output "mask" on the objects will be fed into the next module (Relate in this case) as input and the execution of programs continues. 
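The concept quantization described above can be sketched as follows. The operator architecture, embedding sizes, and the γ, τ constants are assumed values for illustration, not the paper's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttributeOperator(nn.Module):
    """Maps object features into an attribute-specific embedding space
    (one operator per attribute, e.g. Shape or Color)."""
    def __init__(self, feat_dim=1024, embed_dim=64):
        super().__init__()
        self.map = nn.Sequential(nn.Linear(feat_dim, embed_dim), nn.ReLU(),
                                 nn.Linear(embed_dim, embed_dim))
    def forward(self, feats):
        return self.map(feats)

def concept_probs(obj_feats, operator, concept_vec, gamma=0.2, tau=0.1):
    """p(object has concept) = sigmoid((cos(op(o), v_concept) - gamma) / tau)."""
    emb = operator(obj_feats)                                          # [n, d]
    sim = F.cosine_similarity(emb, concept_vec.expand_as(emb), dim=-1)  # [n]
    return torch.sigmoid((sim - gamma) / tau)

def relational_probs(obj_feats, pair_operator, concept_vec, gamma=0.2, tau=0.1):
    """Relational concepts score ordered pairs: concatenate the two objects' features
    (pair_operator must be built with feat_dim = 2 * per-object feature size)."""
    n = obj_feats.shape[0]
    pair = torch.cat([obj_feats.unsqueeze(0).expand(n, n, -1),
                      obj_feats.unsqueeze(1).expand(n, n, -1)], dim=-1)  # [n, n, 2f]
    emb = pair_operator(pair)
    sim = F.cosine_similarity(emb, concept_vec.expand_as(emb), dim=-1)   # [n, n]
    return torch.sigmoid((sim - gamma) / tau)
```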
The last module outputs the final answer to the question. We refer interested readers to Appendix C for the implementation of all operators. Optimization objective. The optimization objective of NS-CL is composed of two parts: concept learning and language understanding. Our goal is to find the optimal parameters Θ v of the visual perception module Perception (including the ResNet-34 for extracting object features, attribute operators. and concept embeddings) and Θ s of the semantic parsing module SemanticParse, to maximize the likelihood of answering the question Q correctly: DISPLAYFORM0 where P denotes the program, A the answer, S the scene, and Executor the quasi-symbolic executor. The expectation is taken over P ∼ SemanticParse(Q; Θ s).Recall the program executor is fully differentiable w.r.t. the visual representation. We compute the gradient w.r.t. DISPLAYFORM1 We use RE-INFORCE to optimize the semantic parser Θ s: DISPLAYFORM2, where the reward r = 1 if the answer is correct and 0 otherwise. We also use off-policy search to reduce the variance of REINFORCE, the detail of which can be found in Appendix D.Curriculum visual concept learning. Motivated by human concept learning as in Figure 1, we employ a curriculum learning approach to help joint optimization. We heuristically split the training samples into four stages (Figure 4 (A)): first, learning object-level visual concepts; second, learning relational questions; third, learning more complex questions with perception modules fixed; fourth, joint fine-tuning of all modules. We found that this is essential to the learning of our neuro-symbolic concept learner. We include more technical details in Appendix E. We demonstrate the following advantages of our NS-CL. First, it learns visual concepts with remarkable accuracy; second, it allows data-efficient visual reasoning on the CLEVR dataset (a); third, it generalizes well to new attributes, visual composition, and language domains. We train NS-CL on 5K images (<10% of CLEVR's 70K training images). We generate 20 questions for each image for the entire curriculum learning process. The Mask R-CNN module is pretrained on 4K generated CLEVR images with bounding box annotations, following. Classification-based concept evaluation. Our model treats attributes as neural operators that map latent object representations into an attribute-specific embedding space (FIG1). We evaluate the concept quantization of objects in the CLEVR validation split. Our model can achieve near perfect classification accuracy (∼99%) for all object properties, suggesting it effectively learns generic concept representations. The for spatial relations is relatively lower, because CLEVR does not have direct queries on the spatial relation between objects. Thus, spatial relation concepts can only be learned indirectly. Count-based concept evaluation. The SOTA methods do not provide interpretable representation on individual objects (a; ;). To evaluate the visual concepts learned by such models, we generate a synthetic question set. The diagnostic question set contains simple questions as the following form: "How many red objects are there?". We evaluate the performance on all concepts appeared in the CLEVR dataset. Table 2 summarizes the compared with strong baselines, including methods based on convolutional features (b) and those based on neural attentions . 
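The displayed equations in the optimization-objective passage above were lost in extraction (the DISPLAYFORM placeholders). Based on the surrounding description, a plausible reconstruction is the following; treat it as a sketch rather than the paper's exact notation.

```latex
% Plausible reconstruction of the lost displays; notation follows the surrounding text.
\begin{align*}
J(\Theta_v, \Theta_s) &=
  \mathbb{E}_{P \sim \mathrm{SemanticParse}(Q;\,\Theta_s)}
  \Big[\Pr\big(A = \mathrm{Executor}\big(\mathrm{Perception}(S;\Theta_v),\, P\big)\big)\Big],
  \qquad \Theta_v^{*}, \Theta_s^{*} = \arg\max_{\Theta_v,\Theta_s} J \\
\nabla_{\Theta_v} J &=
  \mathbb{E}_{P}\Big[\nabla_{\Theta_v}
  \Pr\big(A = \mathrm{Executor}\big(\mathrm{Perception}(S;\Theta_v),\, P\big)\big)\Big]
  \quad \text{(the executor is differentiable w.r.t.\ visual features)} \\
\nabla_{\Theta_s} J &\approx
  \mathbb{E}_{P}\big[\, r \cdot \nabla_{\Theta_s} \log \Pr(P;\,\Theta_s)\,\big],
  \qquad r = \begin{cases} 1 & \text{answer correct} \\ 0 & \text{otherwise} \end{cases}
  \quad \text{(REINFORCE)}
\end{align*}
```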
Our approach outperforms IEP by a significant margin (8%) and attention-based baselines by >2%, suggesting object-based visual representations and symbolic reasoning helps to interpret visual concepts. NS-CL jointly learns visual concepts, words and semantic parsing by watching images and reading paired questions and answers. It can be directly applied to VQA. Table 2: We also evaluate the learned visual concepts using a diagnostic question set containing simple questions such as "How many red objects are there?". NS-CL outperforms both convolutional and attentional baselines. The suggested object-based visual representation and symbolic reasoning approach perceives better interpretation of visual concepts. Table 3: We compare different variants of baselines for a systematic study on visual features and data efficiency. Using only 10% of the training images, our model is able to achieve a comparable with the baselines trained on the full dataset. See the text for details. TAB5 summarizes on the CLEVR validation split. Our model achieves the state-of-theart performance among all baselines using zero program annotations, including MAC and FiLM . Our model achieves comparable performance with the strong baseline TbD-Nets , whose semantic parser is trained using 700K programs in CLEVR (ours need 0). The recent NS-VQA model from Yi et al. FORMULA0 achieves better performance on CLEVR; however, their system requires annotated visual attributes and program traces during training, while our NS-CL needs no extra labels. Here, the visual perception module is pre-trained on ImageNet BID9. Without pretraining, the concept learning accuracies drop by 0.2% on average and the QA accuracy drops by 0.5%. Meanwhile, NS-CL recovers the underlying programs of questions accurately (> 99.9% accuracy). NS-CL can also detect ambiguous or invalid programs and indicate exceptions. Please see Appendix F for more details. NS-CL can also be applied to other visual reasoning testbeds. Please refer to Appendix G.1 for our on the Minecraft dataset .For a systematic study on visual features and data efficiency, we implement two variants of the baseline models: TbD-Object and MAC-Object. Inspired by BID1, instead of the input image, TbD-Object and MAC-Object take a stack of object features as input. TbD-Mask and MAC-Mask integrate the masks of objects by using them to guide the attention over the images. Table 3 summarizes the . Our model outperforms all baselines on data efficiency. This comes from the full disentanglement of visual concept learning and symbolic reasoning: how to execute program instructions based on the learned concepts is programmed. TbD-Object and MAC-Object demonstrate inferior in our experiments. We attribute this to the design of model architectures and have a detailed analysis in Appendix F.3. Although TbD-Mask and MAC-Mask do not perform better than the originals, we find that using masks to guide attentions speeds up the training. Besides achieving a competitive performance on the visual reasoning testbeds, by leveraging both object-based representation and symbolic reasoning, out model learns fully interpretable visual concepts: see Appendix H for qualitative on various datasets. Generalizing to new visual compositions. The CLEVR-CoGenT dataset is designed to evaluate models' ability to generalize to new visual compositions. 
It has two splits: Split A only contains gray, blue, brown and yellow cubes, but red, green, purple, and cyan cylinders; split B imposes the opposite color constraints on cubes and cylinders. If we directly learn visual concepts on split A, it overfits to classify shapes based on the color, leading to a poor generalization to split B.Our solution is based on the idea of seeing attributes as operators. Specifically, we jointly train the concept embeddings (e.g., Red, Cube, etc.) as well as the semantic parser on split A, keeping pretrained, frozen attribute operators. As we learn distinct representation spaces for different attributes, our model achieves an accuracy of 98.8% on split A and 98.9% on split B. Figure 6: Samples collected from four splits in Section 4.3 for illustration. Models are trained on split A but evaluated on all splits for testing the combinatorial generalization. Generalizing to new visual concepts. We expect the process of concept learning can take place in an incremental manner: having learned 7 different colors, humans can learn the 8-th color incrementally and efficiently. To this end, we build a synthetic split of the CLEVR dataset to replicate the setting of incremental concept learning. Split A contains only images without any purple objects, while split B contains images with at least one purple object. We train all the models on split A first, and finetune them on 100 images from split B. We report the final QA performance on split B's validation set. All models use a pre-trained semantic parser on the full CLEVR dataset. Our model performs a 93.9% accuracy on the QA test in Split B, outperforming the convolutional baseline IEP (b) and the attentional baseline TbD by 4.6% and 6.1% respectively. The acquisition of Color operator brings more efficient learning of new visual concepts. Having learned visual concepts on small-scale scenes (containing only few objects) and simple questions (only single-hop questions), we humans can easily generalize the knowledge to larger-scale scenes and to answer complex questions. To evaluate this, we split the CLEVR dataset into four parts: Split A contains only scenes with less than 6 objects, and questions whose latent programs having a depth less than 5; Split B contains scenes with less than 6 objects, but arbitrary questions; Split C contains arbitrary scenes, but restricts the program depth being less than 5; Split D contains arbitrary scenes and questions. Figure 6 shows some illustrative samples. As VQA baselines are unable to count a set of objects of arbitrary size, for a fair comparison, all programs containing the "count" operation over > 6 objects are removed from the set. ForCaption: There is a big yellow cylinder in front of a gray object.(a) An illustrative pair of image and caption in our synthetic dataset. Table 5: We introduce a new simple DSL for image-caption retrieval to evaluate how well the learned visual concepts transfer. Due to the difference between VQA and caption retrieval, VQA baselines are only able to infer the on a partial set of data. The learned object-based visual concepts can be directly transferred into the new domain for free. methods using explicit program semantics, the semantic parser is pre-trained on the full dataset and fixed. Methods with implicit program semantics learn an entangled representation for perception and reasoning, and cannot trivially generalize to more complex programs. 
We only use the training data from the Split A and then quantify the generalization ability on other three splits. Shown in Table 5, our NS-CL leads to almost-perfect generalization to larger scenes and more complex questions, outperforming all baselines by at least 4% in QA accuracy. The learned visual concepts can also be used in other domains such as image retrieval. With the visual scenes fixed, the learned visual concepts can be directly transferred into the new domain. We only need to learn the semantic parsing of natural language into the new DSL.We build a synthetic dataset for image retrieval and adopt a DSL from scene graph-based image retrieval . The dataset contains only simple captions: "There is an <object A> <relation> <object B>." (e.g., There is a box right of a cylinder). The semantic parser learns to extract corresponding visual concepts (e.g., box, right, and cylinder) from the sentence. The program can then be executed on the visual representation to determine if the visual scene contains such relational triples. For simplicity, we treat retrieval as classifying whether a relational triple exists in the image. This functionality cannot be directly implemented on the CLEVR VQA program domain, because questions such as "Is there a box right of a cylinder" can be ambiguous if there exist multiple cylinders in the scene. Due to the entanglement of the visual representation with the specific DSL, baselines trained on CLEVR QA can not be directly applied to this task. For a fair comparison with them, we show the in Table 5b on a subset of the generated image-caption pairs where the underlying programs have no ambiguity regarding the reference of object B. A separate semantic parser is trained for the VQA baselines, which translates captions into a CLEVR QA-compatible program (e.g., Exist(Filter(Box, Relate(Right, Filter(Cylinder))). Table 5c compares our NS-CL against typical image-text retrieval baselines on the full image-caption dataset. Without any annotations of the sentence semantics, our model learns to parse the captions into the programs in the new DSL. It outperforms the CNN-LSTM baseline by 30%. We further conduct experiments on MS-COCO images. Results are presented on the VQS dataset BID13. VQS contains a subset of images and questions from the original VQA 1.0 dataset BID3. All questions in the VQS dataset can be visually grounded: each question is associated with multiple image regions, annotated by humans as essential for answering the question. Figure 7 illustrates an execution trace of NS-CL on VQS.We use a syntactic dependency parser to extract programs and concepts from language BID2 ). The object proposals and features are extracted from models pre-trained on the MS-COCO dataset and the ImageNet dataset, respectively. Illustrated in Figure 7, our model FIG2 shows examples of the learned visual concepts, including object categories, attributes, and relations. Experiment setup and implementation details are in Appendix G.2.In this paper, we focus on a neuro-symbolic framework that learns visual concepts about object properties and relations. Indeed, visual question answering requires AI systems to reason about more general concepts such as events or activities . We leave the extension of NS-CL along this direction and its application to general VQA datasets BID3 as future work. We presented a method that jointly learns visual concepts, words, and semantic parsing of sentences from natural supervision. 
The proposed framework, NS-CL, learns by looking at images and reading paired questions and answers, without any explicit supervision such as class labels for objects. Our model learns visual concepts with remarkable accuracy. Based upon the learned concepts, our model achieves good on question answering, and more importantly, generalizes well to new visual compositions, new visual concepts, and new domain specific languages. The design of NS-CL suggests multiple research directions. First, constructing 3D object-based representations for realistic scenes needs further exploration BID1 BID5. Second, our model assumes a domain-specific language for describing formal semantics. The integration of formal semantics into the processing of complex natural language would be meaningful future work BID4 ). We hope our paper could motivate future research in visual concept learning, language learning, and compositionality. Our framework can also be extended to other domains such as video understanding and robotic manipulation. Here, we would need to discover semantic representations for actions and interactions (e.g., push) beyond static spatial relations. Along this direction, researchers have studied building symbolic representations for skills and learning instruction semantics from interaction in constrained setups. Applying neuro-symbolic learning frameworks for concepts and skills would be meaningful future work toward robotic learning in complex interactive environments. We first introduce the domain-specific language (DSL) designed for the CLEVR VQA dataset (a). Table 6 shows the available operations in the DSL, while TAB9 explains the type system. Scene −→ ObjectSet Return all objects in the scene. Filter (ObjectSet, ObjConcept) −→ ObjectSet Filter out a set of objects having the object-level concept (e.g., red) from the input object set. DISPLAYFORM0 Filter out a set of objects that have the relational concept (e.g., left) with the input object. AERelate (Object, Attribute) −→ ObjectSet (Attribute-Equality Relate) Filter out a set of objects that have the same attribute value (e.g., same color) as the input object. Intersection (ObjectSet, ObjectSet) −→ ObjectSet Return the intersection of two object sets. DISPLAYFORM1 Return the union of two object sets. Query (Object, Attribute) −→ ObjConcept Query the attribute (e.g., color) of the input object. AEQuery (Object, Object, Attribute) −→ Bool (Attribute-Equality Query) Query if two input objects have the same attribute value (e.g., same color). DISPLAYFORM2 Query if the set is empty. DISPLAYFORM3 Query the number of objects in the input set. CLessThan (ObjectSet, ObjectSet) −→ Bool (Counting LessThan) Query if the number of objects in the first input set is less than the one of the second set. CGreaterThan (ObjectSet, ObjectSet) −→ Bool (Counting GreaterThan) Query if the number of objects in the first input set is greater than the one of the second set. CEqual (ObjectSet, ObjectSet) −→ Bool (Counting Equal) Query if the number of objects in the first input set is the same as the one of the second set. Table 6: All operations in the domain-specific language for CLEVR VQA.We note that some function takes Object as its input instead of ObjectSet. These functions require the uniqueness of the referral object. For example, to answer the question " What's the color of the red object?", there should be one and only one red object in the scene. 
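A few representative operations from Table 6 admit simple differentiable realizations over the soft object masks used by the executor. The sketch below shows one plausible form (the paper's exact definitions are in its Table 9), with Unique following the softmax-over-logits casting described in Appendix C.

```python
import torch

def op_filter(mask, concept_probs):
    """Filter(ObjectSet, ObjConcept): keep objects that are in the set AND match
    the concept (soft AND via elementwise product; form assumed)."""
    return mask * concept_probs

def op_relate(obj, rel_probs):
    """Relate(Object, RelConcept): rel_probs[j, i] = P(object i holds the relation
    w.r.t. object j); obj is a distribution over the referred object j."""
    return obj @ rel_probs

def op_count(mask):
    return mask.sum()

def op_exist(mask):
    return mask.max()

def op_unique(mask, eps=1e-6):
    """Cast an ObjectSet to a single Object: softmax over the logits of the
    membership probabilities."""
    m = mask.clamp(eps, 1 - eps)
    return torch.softmax(torch.log(m) - torch.log(1 - m), dim=0)
```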
During the program execution, the input object set will be implicitly cast to the single object (if the set is non-empty and there is only one object in the set). Such casting is named Unique in related works (b The type system of the domain-specific language for CLEVR VQA. As shown in Appendix A, a program can be viewed as a hierarchy of operations which take concepts as their parameters. Thus, NS-CL generates the hierarchies of latent programs in a sequence to tree manner BID11 . The semantic parser adopts an encoder-decoder architecture, which contains four neural modules: a bidirectional GRU encoder IEncoder BID7 to encode an input question into a fixed-length embedding, an operation decoder OpDecoder that determines the operation tokens, such as Filter, in the program based on the sentence embedding, a concept decoder ConceptDecoder that selects concepts appeared in the input question as the parameters for certain operations (e.g., Filter takes an object-level concept parameter while Query takes an attribute), and a set of output encoders {OEncoder i} which encode the decoded operations by OpDecoder and output the latent embedding for decoding the next operation. The operation decoder, the concept decoder, and the output encoders work jointly and recursively to generate the hierarchical program layout. Algorithm 1 illustrates the algorithmic outline of the semantic parser. Algorithm 1: The String-to-Tree Semantic Parser. DISPLAYFORM0 The function parse takes two inputs: the current decoding state f and all concepts appeared in the question, as a set {c i}. The parsing procedure begins with encoding the input question by IEncoder as f 0, extracting the concept set {c i} from the input question, and invoking parse(f 0, {c i}).The concept set {c i} is extracted using hand-coded rules. We assume that each concept (including object-level concepts, relational concepts, and attributes) is associated with a single word in the question. For example, the word "red" is associated with the object-level concept Red, while the word "shape" is associated with the attribute Shape. Informally, we call these words concept words. For a given question Q, the corresponding concept set {c i} is composed of all occurrences of the concept words in Q. The set of concept words is known for the CLEVR dataset. For natural language questions, one could run POS tagging to find all concept words BID2 ). We leave the automatic discovery of concept words as a future work BID15. We use the word embedding of the concept words as the representation for the concepts {c i}. Note that, these "concept embeddings" are only for the program parsing. The visual module has separate concept embeddings for aligning object features with concepts in the visual-semantic space. We now delve into the main function parse(f, {c i}): we first decode the root operation op of the hierarchy by OpDecoder(f). If op requires a concept parameter (an object-level concept, a relational concept, or an attribute), ConceptDecoder will be invoked to choose a concept from all concepts {c i}. Assuming op takes two non-concept inputs (e.g., the operation Intersection takes two object sets as its input), there will be two branches for this root node. Thus, two output encoders OEncoder 0 and OEncoder 1 will be applied to transform the current state f into two sub-states f 1 and f 2. parse will be recursively invoked based on f 1 and f 2 to generate the two branches respectively. 
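The recursive string-to-tree procedure (Algorithm 1) can be skeletonized as below. The neural modules (OpDecoder, ConceptDecoder, OEncoder_i) are assumed to be callables as described in the text, and the arity and concept-parameter tables cover only a few representative operations for illustration.

```python
# Arity and concept-parameter lookup for a few representative DSL operations.
N_INPUTS = {"Scene": 0, "Filter": 1, "Relate": 1, "Query": 1, "Exist": 1,
            "Count": 1, "Intersection": 2, "Union": 2}
TAKES_CONCEPT = {"Filter", "Relate", "Query"}

def parse(f, concepts, modules):
    """Recursive decoding of a program tree from the current decoder state f.
    concepts: embeddings of the concept words found in the question.
    modules: dict of assumed callables {OpDecoder, ConceptDecoder, OEncoder0, OEncoder1}."""
    op = modules["OpDecoder"](f)                        # predict an operation token
    node = {"op": op, "concept": None, "children": []}
    if op in TAKES_CONCEPT:                             # attend over question concepts
        node["concept"] = modules["ConceptDecoder"](f, concepts)
    for i in range(N_INPUTS[op]):
        f_child = modules[f"OEncoder{i}"](f)            # branch-specific state transform
        node["children"].append(parse(f_child, concepts, modules))
    return node
```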
In the DSL, the number of non-concept inputs for any operation is at most 2.In our implementation, the input encoder IEncoder first maps each word in the question into an embedding space. The word embeddings are composed of two parts: a randomly initialized word embedding of dimension 256 and a positional embedding of dimension 128 . For a concept word, its word embedding only depends on which type it belongs to (i.e. object-level, relational or attribute). Thus, after being trained on a fixed dataset, the semantic parser can parse questions with novel (unseen) concept words. The sequence of word embeddings is then encoded by a two-layer GRU with a hidden dimension of 256 * 2 (bidirectional). The function parse starts from the last hidden state of the GRU, and works recursively to generate the hierarchical program layout. Both OpDecoder and ConceptDecoder are feed-forward networks. ConceptDecoder performs attentions over the representations of all concepts {c i} to select the concepts. Output encoders OEncoder 0 and OEncoder 1 are implemented as GRU cells. Another pre-processing of the sentence is to group consecutive object-level concept words into a group and treat them together as a single concept, inspired by the notion of "noun phrases" in natural languages. The computational intuition behind this grouping is that, the latent programs of CLEVR questions usually contain multiple consecutive Filter tokens. During the program parsing and execution, we aim to fuse all such Filters into a single Filter operation that takes multiple concepts as its parameter. A Running Example As a running example, consider again the question " What is the color of the cube right of the red matte object?". We first process the sentence (by rules) as: "What is the <Attribute 1 (color)> of the <(ObjConcept 1 (cube)> <RelConcept 1 (right)> of the <ObjConcept 2 (red matte object)>?". The expected parsing of this sentence is: DISPLAYFORM1 The semantic parser encode the word embeddings with IEncoder. The last hidden state of the GRU will be used as f 0. The word embeddings of the concept words form the set {c i} = {Attribute 1, ObjConcept 1, RelConcept 1, ObjConcept 2}. The function parse is then invoked recursively to generate the hierarchical program layout. Table 8 illustrates the decoding process step-by-step. In this section, we present the implementation of all operations listed in Table 6. We start from the implementation of Object-typed and ObjectSet-typed variables. Next, we discuss how to classify objects by object-level concepts or relational concept, followed by the implementation details of all operations. Object-typed and ObjectSet-typed variables. We consider a scene with n objects. An Objecttyped variable can be represented as a vector Object of length n, where Object i ∈ and i Object i = 1. Object i can be interpreted as the probability that the i-th object of the scene is being referred to. Similarly, an ObjectSet-typed variable can be represented as a vector ObjectSet of length n, where ObjectSet i ∈. ObjectSet i can be interpreted as the probability that the i-the object is in the set. To cast an ObjectSet-typed variable ObjectSet as an Object-typed variableStep Inputs Outputs Recursive Invocation DISPLAYFORM0 (End of branch.) Table 8: A step-by-step running example of the recursive parsing procedure. The parameter {c i} is omitted for better visualization. 
Object (i.e., the Unique operation), we compute: Object = softmax(σ −1 (ObjectSet)), where σ −1 (x) = log(x/(1 − x)) is the logit function. Denote o i as the visual representation of the i-th object, OC the set of all object-level concepts, and A the set of all object-level attributes. Each object-level concept oc (e.g., Red) is associated with a vector embedding v oc and a L1-normalized vector b oc of length |A|. b oc represents which attribute does this object-level concept belong to (e.g., the concept Red belongs to the attribute Color). All attributes a ∈ A are implemented as neural operators, denoted as u a (e.g., uColor). To classify the objects as being Red or not, we compute: DISPLAYFORM0 where σ denotes the Sigmoid function, ·, · the cosine distance between two vectors. γ and τ are scalar constants for scaling and shifting the values of similarities. By applying this classifier on all objects we will obtain a vector of length n, denoted as ObjClassify(Red). Similarly, such classification can be done for relational concepts such as Left. This will in an n × n matrix RelClassify(Left), where RelClassify(Left) j,i is the probability that the object i is left of the object j. To classify whether two objects have the same attribute (e.g., have the same Color), we compute: DISPLAYFORM1 We can obtain a matrix AEClassify(Color) by applying this classifier on all pairs of objects, where AEClassifier(Color) j,i is the probability that the object i and j have the same Color. Quasi-symbolic program execution. Finally, Table 9 summarizes the implementation of all operators. In practice, all probabilities are stored in the log space for better numeric stability. To tackle the optimization in a non-smooth program space, we apply an off-policy program search process to facilitate the learning of the semantic parser. Denote P(s) as the set of all valid programs in the CLEVR DSL for the input question s. We want to compute the gradient w.r.t. Θ s, the parameters of the semantic parser: DISPLAYFORM0 i ) DISPLAYFORM1 i | + γc)/(γc · τc) Table 9: All operations in the domain-specific language for CLEVR VQA. γ c = 0.5 and τ c = 0.25 are constants for scaling and shift the probability. During inference, one can quantify all operations as.where P ∼ SemanticParse(s; Θ s). In REINFORCE, we approximate this gradient via Monte Carlo sampling. An alternative solution is to exactly compute the gradient. Note that in the definition of the reward r, only the set of programs Q(s) leading to the correct answer will contribute to the gradient term. With the perception module fixed, the set Q can be efficiently determined by an off-policy exhaustive search of all possible programs P(s). In the third stage of the curriculum learning, we search for the set Q offline based on the quantified of concept classification and compute the exact gradient ∇Θ s. An intuitive explanation of the off-policy search is that, we enumerate all possible programs, execute them on the visual representation, and find the ones leading to the correct answer. We use Q(s) as the "groundtruth" program annotation for the question, to supervise the learning, instead of running the Monte Carlo sampling-based REINFORCE.Spurious program suppression. However, directly using Q(s) as the supervision by computing = p∈Q(S) − log Pr(p) can be problematic, due to the spuriousness or the ambiguity of the programs. This comes from two aspects: 1) intrinsic ambiguity: two programs are different but equivalent. 
For example P1: AEQuery(Color, Filter(Cube), Filter(Sphere)) and P2: Exist(Filter(Sphere, AERelate(Color, Filter(Cube)))) are equivalent. 2) extrinsic spuriousness: one of the program is incorrect, but also leads to the correct answer in a specific scene. For example, P1: Filter(Red, Relate(Left, Filter(Sphere))) and P2: Filter(Red, Relate(Left, Filter(Cube))) may refer to the same red object in a specific scene. Motivated by the REINFORCE process, to suppress such spurious programs, we use the loss function: DISPLAYFORM2 The corresponding gradient ∇ Θs is, DISPLAYFORM3 The key observation is that, given a sufficiently large set of scenes, a program can be identified as spurious if there exists at least one scene where the program leads to a wrong answer. As the training goes, spurious programs will get less update due to the sampling importance term Pr[p] which weights the likelihood maximization term. During the whole training process, we gradually add more visual concepts and more complex question examples into the model. Summarized in Figure 4 (A), in general, the whole training process is split into 3 stages. First, we only use questions from lesson 1 to let the model learn object-level visual concepts. Second, we train the model to parse simple questions and to learn relational concepts. In this step, we freeze the neural operators and concept embeddings of object-level concepts. Third, the model gets trained on the full question set (lesson 3), learning to understand questions of different complexities and various format. For the first several iterations in this step, we freeze the parameters in the perception modules. In addition, during the training of all stages, we gradually increase the number of objects in the scene: from 3 to 10.We select questions for each lesson in the curriculum learning by their depth of the latent program layout. For eaxmple, the program "Query(Shape, Filter(Red, Scene))" has the depth of 3, while the program "Query(Shape, Filter(Cube, Relate(Left, Filter(Red, Scene))))" has the depth of 5. Since we have fused consecutive Filter operations into a single one, the maximum depth of all programs is 9 on the CLEVR dataset. We now present the detailed split of our curriculum learning lessons:For lesson 1, we use only programs of depth 3. It contains three types of questions: querying an attribute of the object, querying the existence of a certain type of objects, count a certain type of objects, and querying if two objects have the same attribute (e.g., of the same color). These questions are almost about fundamental object-based visual concepts. For each image, we generate 5 questions of lesson 1.For lesson 2, we use programs of depth less than 5, containing a number of questions regarding relations, such as querying the attribute of an object that is left of another object. We found that in the original CLEVR dataset, all Relate operations are followed by a Filter operation. This setup degenerates the performance of the learning of relational concepts such as Left. Thus, we add a new question template into the original template set: Count(Relate( ·, Filter( ·, Scene))) (e.g., "What's the number of objects that are left of the cube?"). For each image, we generate 5 questions of lesson 2.For lesson 3, we use the full CLEVR question set. Curriculum learning is crucial for the learning of our neuro-symbolic concept learner. We found that by removing the curriculum setup w.r.t. 
the number of object in the scenes, the visual perception module will get stuck at an accuracy that is similar to a random-guess model, even if we only use stage-1 questions. If we remove the curriculum setup w.r.t. the complexity of the programs, the joint training of the visual perception module and the semantic parser can not converge. We conduct ablation studies on the accuracy of semantic parsing, the impacts of the ImageNet pretraining of visual perception modules, the data efficiency of our model, and the usage of object-based representations. F.1 SEMANTIC PARSING ACCURACY.We evaluate how well our model recovers the underlying programs of questions. Due to the intrinsic equivalence of different programs, we evaluate the accuracy of programs by executing them on the ground-truth annotations of objects. Invalid or ambiguous programs are also considered as incorrect. Our semantic parser archives > 99.9% QA accuracy on the validation split. The only extra supervision of the visual perception module comes from the pre-training of the perception modules on ImageNet BID9. To quantify the influence of this pre-training, we conduct ablation experiments where we randomly initialize the perception module following. The classification accuracies of the learned concepts almost remain the same except for Shape. The classification accuracy of Shape drops from 98.7 to 97.5 on the validation set while the overall QA accuracy on the CLEVR dataset drops to 98.2 from 98.9. We speculate that large-scale image recognition dataset can provide prior knowledge of shape. In this section, we study whether and how the number of training samples and feature representations affect the overall performance of various models on the CLEVR dataset. Specifically, we compare the proposed NS-CL against two strong baselines: TbD and MAC .Baselines. For comparison, we implement two variants of the baseline models: TbD-Object and MAC-Object. Inspired by BID1, instead of using a 2D convolutional feature map, TbD-Object and MAC-Object take a stack of object features as inputs, whose shape is k × d obj. k is the number of objects in the scene, and d obj is the feature dimension for a single object. In our experiments, we fix k = 12 as a constant value. If there are fewer than 12 objects in the scene, we add "null" objects whose features are all-zero vectors. We extract object features in the same way as NS-CL. Features are extracted from a pre-trained ResNet-34 network before the last residual block for a feature map with high resolution. For each object, its feature is composed of two parts: region-based (by RoI Align) and image-based features. We concatenate them to represent each object. As discussed, the inclusion of the representation of the full scene is essential for the inference of relative attributes such as size or spatial position on the CLEVR domain. TbD and MAC networks are originally designed to use image-level attention for reasoning. Thus, we implement two more baselines: TbD-Mask and MAC-Mask. Specifically, we replace the original attention module on images with a mask-guided attention. Denotes the union of all object masks as M. Before the model applies the attention on the input image, we multiply the original attention map computed by the model with this mask M. The multiplication silences the attention on pixels that are not part of any objects. Results. Table 3 summarizes the . We found that TbD-Object and MAC-Object approach show inferior compared with the original model. 
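Before analyzing why these variants underperform, here is a minimal sketch of how the object-feature baselines are constructed as described above: per-object features padded to a fixed k = 12 with all-zero "null" objects, and image attention gated by the union of object masks (the renormalization step is an assumption).

```python
import torch

def pad_object_features(obj_feats, k=12):
    """Stack per-object features into a fixed-size [k, d] tensor, padding with
    all-zero 'null' objects when fewer than k objects are detected."""
    n, d = obj_feats.shape
    out = torch.zeros(k, d)
    out[:min(n, k)] = obj_feats[:k]
    return out

def mask_guided_attention(attention_map, object_masks, eps=1e-8):
    """Silence attention on pixels outside every object mask.
    attention_map: [H, W]; object_masks: [n, H, W]."""
    union = (object_masks > 0.5).any(dim=0).float()
    guided = attention_map * union
    return guided / (guided.sum() + eps)   # renormalization assumed
```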
We attribute this to the design of the network architectures. Take the Relate operation (e.g., finds all objects left of a specific object x) as an example. TbD uses a stack of dilated convolutional layers to propagate the attention from object x to others. In TbD-Object, we replace the stack of 2D convolutions by several 1D convolution layers, operating over the k × d obj object features. This ignores the equivalence of objects (the order of objects should not affect the ). In contrast, MAC networks always use the attention mechanism to extract information from the image representation. This operation is invariant to the order of objects, but is not suitable for handling quantities (e.g., counting objects).As for TbD-Mask and MAC-Mask, although the mask-guided attention does not improve the overall performance, we have observed noticeably faster convergence during model training. TbD-Mask and MAC-Mask leverage the prior knowledge of object masks to facilitate the attention. Such prior has also been verified to be effective in the original TbD model: TbD employs an attention regularization during training, which encourages the model to attend to smaller regions. In general, NS-CL is more data-efficient than MAC networks and TbD. Recall that NS-CL answers questions by executing symbolic programs on the learned visual concepts. Only visual concepts (such as Red and Left) and the interpretation of questions (how to translate questions into executable programs) need to be learned from data. In contrast, both TbD and MAC networks need to additionally learn to execute (implicit or explicit) programs such as counting. For the experiments on the full CLEVR training set, we split 3,500 images (5% of the training data) as the hold-out validation set to tune the hyperparameters and select the best model. We then apply this model to the CLEVR validation split and report the testing performance. Our model reaches an accuracy of 99.2% using the CLEVR training set. We also extend the experiments to a new reasoning testbed: Minecraft worlds . The Minecraft reasoning dataset differs from CLEVR in both visual appearance and question types. FIG3 gives an example instance from the dataset. Besides different 3D visual appearance and image contexts, the Minecraft reasoning dataset introduces two new types of reasoning operations. We add them to our domain-specific language:1. FilterMost(ObjectSet, Concept) → ObjectSet: Given a set of objects, finds the "most" one. For example, FilterMost(Closest, set) locates the object in the input set that is cloest to the camera (e.g., what is the direction of the closest animal?) 2. BelongTo(Object, ObjectSet) → Bool: Query if the input object belongs to a set. Results. TAB11 summarizes the and FIG5 shows sample execution traces. We compare our method against the NS-VQA baseline , which uses strong supervision for both scene representation (e.g., object categories and positions) and program traces. In contrast, our method learns both by looking at images and reading question-answering pairs. NS-CL outperforms NS-VQA by 5% in overall accuracy. We attribute the inferior of NS-VQA to its derendering module. Because objects in the Minecraft world usually occlude with each other, the detected object bounding boxes are inevitably noisy. During the training of the derendering module, each detected bounding box is matched with one of the ground-truth bounding boxes and uses its class and pose as supervision. 
Poorly localized bounding boxes lead to noisy labels and hurt the accuracy of the derendering module. This further influences the overall performance of NS-VQA. We conduct experiments on the VQS dataset BID13. VQS is a subset of the VQA 1.0 dataset BID3. FIG4 shows a sample: on the right, we show the original question and answer in natural language, as well as the latent program recovered by our parser. To answer this question, models are expected to attend to the man and his pen in the pocket. Setup. All models are trained on the first 63,509 images of the training set, and tested on the test split. For hyper-parameter tuning and model selection, the remaining 5,000 images from the training set are used for validation. We use the multiple-choice setup for VQA: the models choose their most confident answer from 18 candidate answers for each question. To obtain the latent programs from natural language, we use a pre-trained syntactic dependency parser BID2 for extracting programs and concepts that need to be learned. A sample question and the program obtained by our parser are shown in FIG4. The concept embeddings are initialized by the bag of words (BoW) over the GloVe word embeddings. Baselines. We compare our model against two representative baselines: MLP and MAC. MLP is a standard baseline for visual question answering, which treats the multiple-choice task as a ranking problem. For a specific candidate answer, a multi-layer perceptron (MLP) model is used to encode a tuple of the image, the question, and the candidate answer. The MLP outputs a score for each tuple, and the answer to the question is the candidate with the highest score. We encode the image with a ResNet-34 pre-trained on ImageNet and use BoW over the GloVe word embeddings for the question and option encoding. We slightly modify the MAC network for the VQS dataset. For each candidate answer, we concatenate the question and the answer as the input to the model. The MAC model outputs a score from 0 to 1, and the answer to the question is the candidate with the highest score. The image features are extracted from the same ResNet-34 model. Results. TAB9 summarizes the results. NS-CL achieves results comparable with the MLP baseline and the MAC network designed for visual reasoning. Our model also brings transparent reasoning over natural images and language. Example execution traces generated by NS-CL are shown in FIG1. In addition, the symbolic reasoning process helps us inspect the model and diagnose error sources. See the caption for details. Another appealing benefit is that our reasoning model enjoys full interpretability. Figure 11, FIG5, and FIG1 show NS-CL's execution traces on CLEVR, Minecraft, and VQS, respectively. As a by-product, our system detects ambiguous and invalid programs and throws exceptions. As an example (Figure 11), the question "What's the color of the cylinder?" can be ambiguous if there are multiple cylinders or even invalid if there are no cylinders. Figure 14 and FIG6 include qualitative visualizations of the concepts learned from the CLEVR and Minecraft datasets, including object categories, attributes, and relations. We choose samples from the validation or test split of each dataset by generating queries of the corresponding concepts. We set a threshold to filter the returned images and objects. For quantitative evaluations of the learned concepts on the CLEVR dataset, please refer to Table 2 and Table 5. Figure 11: Visualization of the execution trace generated by our Neuro-Symbolic Concept Learner on the CLEVR dataset.
Examples A and B are successful executions that generate correct answers. In Example C, the execution aborts at the first operator. To inspect the reason why the execution engine fails to find the corresponding object, we can read out the visual representation of the object and locate the error source as a misclassification of the object material. Example D shows how our symbolic execution engine can detect invalid or ambiguous programs during the execution by performing sanity checks. FIG1: Illustrative execution trace generated by our Neuro-Symbolic Concept Learner on the VQS dataset. Execution traces A and B shown in the figure lead to the correct answers to the questions. Our model effectively learns visual concepts from data. The symbolic reasoning process brings a transparent execution trace and can easily handle quantities (e.g., object counting in Example A). In Example C, although NS-CL answers the question correctly, it locates the wrong object during reasoning: a dish instead of the cake. In Example D, our model misclassifies the sport as frisbee.
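To make the two Minecraft-specific operations added to the domain-specific language above concrete, here is a minimal, hedged sketch of how a symbolic executor might implement them. The measure used to rank objects (e.g., distance to the camera for Closest) and the helper names are our own illustrative assumptions.

```python
def filter_most(object_set, concept_measure):
    """FilterMost(ObjectSet, Concept) -> ObjectSet: return the single object that
    maximizes the measure associated with the concept (e.g., for Closest, the
    negative distance to the camera)."""
    if not object_set:
        raise ValueError("FilterMost applied to an empty object set")  # invalid program
    return [max(object_set, key=concept_measure)]

def belong_to(obj, object_set):
    """BelongTo(Object, ObjectSet) -> Bool: query whether the object is in the set."""
    return obj in object_set

# Hypothetical usage for "what is the direction of the closest animal?":
# animals = filter_concept(scene_objects, "animal")          # standard Filter op
# closest = filter_most(animals, lambda o: -o["distance"])   # assumes a distance field
```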
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJgMlhRctm
We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them.
Bayesian inference offers a theoretically grounded and general way to train neural networks and can potentially give calibrated uncertainty. However, it is challenging to specify a meaningful and tractable prior over the network parameters, and deal with the weight correlations in the posterior. To this end, this paper introduces two innovations: (i) a Gaussian process-based hierarchical model for the network parameters based on recently introduced unit embeddings that can flexibly encode weight structures, and (ii) input-dependent contextual variables for the weight prior that can provide convenient ways to regularize the function space being modeled by the network through the use of kernels. We show these models provide desirable test-time uncertainty estimates, demonstrate cases of modeling inductive biases for neural networks with kernels and demonstrate competitive predictive performance on an active learning benchmark. The question of which priors one should use for Bayesian neural networks is largely unanswered, as two considerations need to be balanced: First, we want to keep inference in the high dimensional weight posterior tractable; Second, we desire to express our beliefs about the properties of the modeled functions compactly by modeling the collection of weights. Especially the latter is typically hard, as functional regularization for weight-based models is non-trivial. In order to cope with richer posterior inference than mean-field typically achieves, a variety of structured posterior models have been proposed recently, for instance utilizing radial posteriors , or rich weight posteriors based on Gaussian processes . When it comes to modeling priors on weights with correlations, recent work has attempted to capture feature-level correlations using for instance a horseshoe prior . One interesting direction of inquiry has focused on utilizing hyper-networks in order to model distributions over weights for an entire network , or alternatively to utilize unit-level level variables combined with compact hyper-networks to regress to single weights and capture weight correlations through the auxiliary variables . We propose to tackle some of the challenges in modeling weight priors by extending the latter work and combining it with ideas from the Gaussian process literature to replace the hyper-network with a Gaussian process prior over weights. We explore the use of compositional kernels to add input-dependence to the prior for our model and obtain rich models with beneficial properties in tasks such as active learning, and generalization, while maintaining tractable inference properties. In each unit (visible or hidden) of the l-th layer of the network has a corresponding latent hierarchical variable z l,i, of dimensions D z, where i denotes the index of the unit in a layer. These latent variables are used to construct the weights in the network such that a weight in the l-th weight layer, w l,i,j is linked to the latent variables z's of the i-th input unit and the j-th output unit of the weight layer. We can summarize this relationship by introducing a set of weight encodings, C w (z), one for each individual weight, c w l,i,j = z l+1,i, z l,j. The probabilistic description of the relationship between the weight codes and the weights w is:, where l denotes a visible or hidden layer and H l is the number of units in that layer, and w denotes all the weights in this network. In , a small parametric neural network regression model maps the latent variables to the weights,. 
We will call this network a meta mapping. We assume p(z) = N (z; 0, I). We can thus write down the joint density of the ing hierarchical model as follows, Variational inference was employed in prior work to infer z (and w implicitly), and to obtain a point estimate of θ, as a by-product of optimising the variational lower bound. Notice that in Sec.2, the meta mapping from the hierarchical latent variables to the weights is a parametric non-linear function, specified by a neural network. We replace the parametric neural network by a probabilistic functional mapping and place a nonparametric Gaussian process (GP) prior over this function. That is, where we have assumed a zero-mean GP, k γ (·, ·) is a covariance function and γ is a small set of hyper-parameters. The effect is that the latent function introduces correlations for the individual weight predictions, Notably, while the number of latent variables and weights can be large, the input dimension to the GP mapping is only 2D z, where D z is the dimensionality of each latent variable z. The GP mapping effectively performs one-dimensional regression from latent variables to individual weights while capturing their correlations. We will refer to this mapping as a GP-MetaPrior (metaGP). We define the following factorized kernel at the example of two weights in the network, In this section and what follows, we will use the popular exponentiated quadratic (EQ) kernel with ARD lengthscales, are the lengthscales and σ 2 k is the kernel variance. We cover inference and learning in App. A. We first note that whilst the hierarchical latent variables and meta mappings introduce nontrivial coupling between the weights a priori, the weights and latent variables are inherently global. That is, a function drawn from the model, represented by a set of weights, does not take into account the inputs at which the function will be evaluated. To this end, we introduce the input variable into the weight codes c w l,i,j = z l+1,i, z l,j, x n. In turn, this yields input-conditional weight models p(w n,l,i,j |f, z l+1,i, z l,j, x n). We again turn to compositional kernels and introduce a new input kernel K x which we use as follows, As a of having private contextual inputs to the meta mapping, the weight priors are now also local to each data point. We can utilize multiple useful kernels from the GP literature that allow modelers to describe relationships between data, but were previously inaccessible to neural network modelers. We consider this a novel form of functional regularization, as the entire network can be given structure that will constrain its function space. To scale this to large inputs, we learn transformations of inputs for the conditional weight model n = g(Vx n), for a learned mapping V and a nonlinearity g: We write down the joint density of all variables in the model when using our weight prior in a neural network: We discuss inference and learning in the Appendix Sec. A. We study our suggested priors empirically in two distinct settings in the following: first, we study the effect of kernel choice in the local model for a regression problem where we may have available intuitions as inductive biases. Second, we explore how the input-dependence behaves in out of distribution generalization tasks. We explore the utility of the contextual variable towards modeling inductive biases for neural networks and evaluate on predictive performance on a regression example. 
In particular, we generate 100 training points from a synthetic sinusoidal function and create two test sets that contain in-sample inputs and out-of-sample inputs, respectively. We test an array of models and inference methods, including a BNN with MFVI, metaGP, and metaGP with contextual variables. We can choose the covariance function to be used for the auxiliary variables to encode our belief about how the weights should be modulated by the input. We pick EQ and periodic kernels in this example. Fig. 2 summarizes the results and illustrates the qualitative differences between the models. Note that the periodic kernel allows the model to discover and encode periodicity, allowing for more confident long-range predictions compared to the EQ kernel. We test the ability of this model class to produce calibrated predictive uncertainty on out-of-distribution samples. We first train a neural network classifier with one hidden layer of 100 rectified linear units on the MNIST dataset, and apply the metaGP prior only to the last layer of the network. After training, we compute the entropy of the predictions on various test sets, including notMNIST, fashionMNIST, Kuzushiji-MNIST, and uniform and Gaussian noise inputs. Following prior work, the CDFs of the predictive entropies for various methods are shown in Fig. 3. In most out-of-distribution sets considered, metaGP and metaGP with local auxiliary variables demonstrate competitive performance to Gaussian MFVI. Notably, MAP estimation tends to give wildly poor uncertainty estimates on out-of-distribution samples. We illustrated the utility of a GP-based hierarchical prior over neural network weights and a variational inference scheme that captures weight correlations and allows input-dependent contextual variables. We plan to evaluate the performance of the model on more challenging decision making tasks and to extend the inference scheme to handle continual learning. Appendix A: Inference and learning using stochastic structured variational inference. Performing inference is challenging due to the non-linearity of the neural network and the need to infer an entire latent function f. To address these problems, we derive a structured variational inference scheme that makes use of innovations from the inducing-point GP approximation literature (e.g., Quiñonero-Candela et al.) and previous work on inferring meta-representations. As a reminder, we write down the joint density of all variables in the model: We first partition the space Z of inputs to the function f into a finite set of M variables called inducing inputs x_u and the remaining inputs, Z = {x_u, Z_≠u}. The function f is partitioned identically, f = {u, f_≠u}, where u = f(x_u). We can then rewrite the GP prior as follows, The inducing inputs and outputs, {x_u, u}, will be used to parameterize the approximation. In particular, a variational approximation is judiciously chosen to mirror the form of the joint density: where the variational distribution over w is made to explicitly depend on the remaining variables through the conditional prior, q(z) is chosen to be a diagonal (mean-field) Gaussian density, q(z) = N(z; µ_z, diag(σ²_z)), and q(u) is chosen to be a correlated multivariate Gaussian, q(u) = N(u; µ_u, Σ_u). This approximation allows convenient cancellations, yielding a tractable variational lower bound as follows, where the last expectation has been partly approximated using simple Monte Carlo with the reparameterization trick, i.e., z_k ∼ q(z).
We will next discuss how to approximate the expectation F k = w,f q(w, f |z k) log p(y|w, x). Note that we split f into f =u and u, and that we can integrate f =u out exactly to give, q(w|z k, u) = N (w; A (k) u, B (k) ), At this point, we can either (i) sample u from q(u), or (ii) integrate u out analytically. We opt for the second approach, which gives In contrast to GP regression and classification in which the likelihood term is factorized point-wise w.r.t. the parameters and thus their expectations only involve a low dimensional integral, we have to integrate out w in this case, which is of much higher dimensions. When necessary or practical, we resort to Kronecker factored models or make an additional diagonal approximation as follows, Whilst the diagonal approximation above might look poor from the first glance, it is conditioned on a sample of the latent variables z k and thus the weights' correlations are retained after integrating out z. Such correlation is illustrated in 4 where we show the marginal and conditional covariance structures for the weights of a small neural network, separated into diagonal and full covariance models. The diagonal approximation above has been observed to give pathological behaviours in the GP regression case , but we did not observe these in practice. F k is approximated by F k ≈ wq (w|z k) log p(y|w, x) which can be subsequently efficiently estimated using the local reparameterization trick . The final lower bound is then optimized to obtain the variational parameterers of q(u), q(z), and estimates for the noise in the meta-GP model, the kernel hyper-parameters and the inducing inputs. selection and more crucially using the proposed model and inference scheme seems to yield comparable or better predictive errors with a similar number of queries. This simple setting quantitatively reveals the inferior performance of MFVI, compared to MAP and metaGP.
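Returning to the weight prior itself, the factorized kernel over weight codes can be sketched as follows: each weight is indexed by the two unit embeddings it connects (plus, in the contextual variant, a transformed input), and the covariance between two weights is taken here as a product of EQ kernels over these components, which is one natural reading of the factorization described in the main text. The NumPy sketch below is illustrative; lengthscales, variances, and names are our own.

```python
import numpy as np

def eq_kernel(a, b, lengthscales, variance=1.0):
    """Exponentiated quadratic kernel with ARD lengthscales between two vectors."""
    d = (np.asarray(a) - np.asarray(b)) / lengthscales
    return variance * np.exp(-0.5 * np.sum(d ** 2))

def weight_code_kernel(z_out_1, z_in_1, z_out_2, z_in_2, ls_z,
                       x_1=None, x_2=None, ls_x=None):
    """Covariance between two weights given the unit embeddings of their output
    and input units, and (optionally) contextual inputs for the local prior."""
    k = eq_kernel(z_out_1, z_out_2, ls_z) * eq_kernel(z_in_1, z_in_2, ls_z)
    if x_1 is not None and x_2 is not None:
        k *= eq_kernel(x_1, x_2, ls_x)  # K_x: input-dependent (contextual) component
    return k
```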
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Bylhq134Fr
We introduce a Gaussian Process Prior over weights in a neural network and explore its ability to model input-dependent weights with benefits to various tasks, including uncertainty estimation and generalization in the low-sample setting.
We perform an in-depth investigation of the suitability of self-attention models for character-level neural machine translation. We test the standard transformer model, as well as a novel variant in which the encoder block combines information from nearby characters using convolution. We perform extensive experiments on WMT and UN datasets, testing both bilingual and multilingual translation to English using up to three input languages (French, Spanish, and Chinese). Our transformer variant consistently outperforms the standard transformer at the character level and converges faster while learning more robust character-level alignments. Most existing Neural Machine Translation (NMT) models operate on the word or subword level. Often, these models are memory inefficient because of their large vocabulary sizes. Character-level models instead work directly on raw characters, resulting in a more compact language representation, while mitigating out-of-vocabulary (OOV) problems. They are especially suitable for multilingual translation, where multiple languages can be modelled using the same character vocabulary. Multilingual training can lead to improvements in the overall performance without any increase in model complexity. It also circumvents the need to train separate models for each language pair. Models based on self-attention have achieved excellent performance on a number of tasks including machine translation and representation learning. Despite the success of these models, no previous work has considered their suitability for character-level translation. In this work, we perform an in-depth investigation of the suitability of self-attention models for character-level translation. We consider two models: the standard transformer, as well as a novel variant, which we call the convtransformer (Figure 1, Section 3). The latter uses convolution to facilitate interactions among nearby character representations. We evaluate these models on both bilingual and multilingual translation to English, using up to three input languages: French (FR), Spanish (ES), and Chinese (ZH). We compare their translation performance on close (e.g., FR and ES) and on distant (e.g., FR and ZH) input languages (Section 5.1), and we analyze their learned character alignments (Section 5.2). We find that self-attention models work surprisingly well for character-level translation, performing competitively with equivalent subword-level models while requiring up to 60% fewer parameters. At the character level, the convtransformer performs better than the standard transformer, converging faster and producing more robust alignments. Fully character-level translation was first tackled by earlier work, which proposed a recurrent encoder-decoder model. Their encoder combines convolutional layers with max pooling and highway layers to construct intermediate representations of segments of nearby characters. Their decoder network autoregressively generates the output translation one character at a time, utilizing attention on the encoded representations. This approach showed promising results on multilingual translation in particular. Without any architectural modifications, training on multiple source languages yielded performance improvements while also acting as a regularizer.
Multilingual training of character-level models is possible not only for languages that have almost identical character vocabularies, such as French and Spanish, but even for distant languages for which a mapping to a common character-level representation can be made, for example through latinizing Russian or Chinese. More recent work performs an in-depth comparison between different character- and subword-level models. It shows that, given sufficient computational time and model capacity, character-level models can outperform subword-level models, due to their greater flexibility in processing and segmenting the input and output sequences. The transformer is an attention-driven encoder-decoder model that has achieved state-of-the-art performance on a number of sequence modelling tasks in NLP. Instead of using recurrence, the transformer uses only feedforward layers based on self-attention. The standard transformer architecture consists of six stacked encoder layers that process the input using self-attention and six decoder layers that autoregressively generate the output sequence. Intuitively, attention as an operation is not as meaningful for encoding characters as it is for words. However, recent work on language modelling has surprisingly shown that attention can be very effective for modelling characters, raising the question of how well the transformer would work on character-level bilingual and multilingual translation, and what architectures would be suitable for this task. These are the questions this paper sets out to investigate. To facilitate character-level interactions in the transformer, we propose a modification of the standard architecture which we call the convtransformer. In this architecture, we use the same decoder as the standard transformer, but we adapt each encoder block to include an additional sub-block. The sub-block (Figure 1, b), inspired by prior work, consists of three parallel 1D convolutional layers. We use separate context window sizes of 3, 5 and 7 for each convolutional layer in order to resemble character interactions of different levels of granularity, similar to the subword or word level. Finally, we fuse the representations using an additional convolutional layer, resulting in an output dimensionality that is identical to the input dimensionality. Therefore, in contrast to earlier approaches that use max pooling to compress the input character sequence into segments of characters, here we leave the resolution unchanged, for both transformer and convtransformer models. For additional flexibility, we add a residual connection from the input to the output of the convolutional block. Datasets. We conduct experiments on two datasets. First, we use the WMT15 DE→EN dataset, on which we test different model configurations and compare our results to previous work on character-level translation. We follow the preprocessing of previous work and use the newstest-2014 dataset for testing. Second, we conduct our main experiments using the United Nations Parallel Corpus (UN), for two reasons: (i) UN contains a large number of parallel sentences from six languages, allowing us to conduct multilingual experiments; (ii) all sentences in the corpus are from the same domain. We construct our training corpora by randomly sampling one million sentence pairs from the FR, ES, and ZH parts of the UN dataset, targeting translation to English. Table 2: BLEU scores on the UN dataset, for different input training languages (first column), evaluated on three different test sets (t-FR, t-ES and t-ZH). The target language is always English. #P is the number of training pairs. The best overall results for each language are shown in bold.
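A minimal PyTorch-style sketch of the convtransformer encoder sub-block described in Section 3 is given below: three parallel 1D convolutions with context windows 3, 5 and 7 over the character representations, a fusing convolution that restores the input dimensionality, and a residual connection. The kernel size of the fusing convolution and the concatenate-then-fuse arrangement are our own assumptions.

```python
import torch
import torch.nn as nn

class ConvSubBlock(nn.Module):
    """Parallel 1D convolutions over character representations (convtransformer encoder)."""
    def __init__(self, d_model):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv1d(d_model, d_model, kernel_size=k, padding=k // 2)
            for k in (3, 5, 7)  # separate context window sizes
        ])
        # fuse the three representations back to the input dimensionality
        self.fuse = nn.Conv1d(3 * d_model, d_model, kernel_size=3, padding=1)

    def forward(self, x):
        # x: (batch, seq_len, d_model); Conv1d expects (batch, channels, seq_len)
        h = x.transpose(1, 2)
        h = torch.cat([conv(h) for conv in self.convs], dim=1)
        h = self.fuse(h)
        return x + h.transpose(1, 2)  # residual connection; sequence resolution unchanged
```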
To construct multilingual datasets, we combine the respective bilingual datasets (e.g., FR→EN and ES→EN) and shuffle them. In order to ensure all languages share the same character vocabulary, we latinize the Chinese dataset using the Wubi encoding method, following previous work. For testing, we use the original UN test sets provided for each pair. Tasks. Our experiments are designed as follows: (i) a bilingual scenario, in which we train a model with a single input language; (ii) a multilingual scenario, in which we input two or three languages at the same time without providing any language identifiers to the models or increasing their parameters. We test combining input languages that can be considered more similar in terms of syntax and vocabulary (e.g., FR and ES) as well as more distant (e.g., ES and ZH). Model comparison. In Table 1, we compare the BLEU performance of different character-level architectures trained on the WMT dataset. For reference, we include the recurrent character-level model from prior work, as well as transformers trained on the subword level, using a vocabulary of 50k byte-pair encoding (BPE) tokens. All models were trained on four Nvidia GTX 1080X GPUs for 20 epochs. We find character-level training to be 3 to 5 times slower than subword-level training, due to much longer sequence lengths. However, the standard transformer trained at the character level already achieves very strong performance, outperforming the recurrent model. Character-level transformers also perform competitively with equivalent BPE models while requiring up to 60% fewer parameters. Our convtransformer variant performs on par with the standard transformer on this dataset. Multilingual experiments. In Table 2, we report our BLEU results on the UN dataset using the 6-layer transformer/convtransformer models. All models were trained for 30 epochs. Multilingual models are evaluated on translation from all possible input languages to English. The convtransformer consistently outperforms the transformer on this dataset, with a gap of up to 2.3 BLEU on bilingual translation (ZH→EN) and up to 2.6 BLEU on multilingual translation (FR+ZH→EN). Training multilingual models on similar input languages (FR+ES→EN) leads to improved performance for both languages, which is consistent with previous findings. Training on distant languages can surprisingly still be effective: for example, the models trained on FR+ZH→EN outperform the models trained just on FR→EN; however, they perform worse than the bilingual models trained on ZH→EN. Thus, distant-language training seems only to be helpful when the input language is closer to the target translation language (which is English here). The convtransformer is about 30% slower to train than the transformer; however, as shown in Figure 2, the convtransformer reaches comparable performance in less than half the number of epochs, leading to an overall training speedup compared to the transformer.
Our intuition is that the bilingual models have the greatest flexibility to learn high-quality alignments because they are not distracted by other input languages. Multilingual models, by contrast, might learn lower-quality alignments because either (i) the architecture is not robust enough for multilingual training; or (ii) the languages are too dissimilar to allow for effective joint training, prompting the model to learn alternative alignment strategies to accommodate all languages. We quantify the alignments using canonical correlation analysis (CCA). First, we sample 500 random sentences from each of our UN testing datasets (FR, ES, or ZH) and then produce alignment matrices by extracting the encoder-decoder attention from the last layer of each model. We use CCA to project each alignment matrix to a common vector space and infer the correlation. We conduct the analysis on our transformer and convtransformer models separately. Our results are shown in Figure 3. For similar source and target languages (e.g., the FR+ES→EN model), we observe a strong positive correlation with the bilingual models, indicating that alignments can be simultaneously learned. When introducing a distant source language (ZH) in the training, we observe a drop in correlation for FR and ES, and an even bigger drop for ZH. This is in line with our BLEU results from Section 5.1, suggesting that multilingual training of distant languages is more challenging. The convtransformer is more robust to the introduction of a distant language than the transformer (p < 0.005 for FR and ES inputs, according to a one-way ANOVA test). We performed a detailed investigation of the utility of self-attention models for character-level translation, testing the standard transformer architecture, as well as a novel variant augmented by convolution in the encoder to facilitate information propagation across characters. Our experiments show that self-attention performs very well on character-level translation, performing competitively with subword-level models while requiring fewer parameters. Training on multiple input languages is also effective and leads to improvements across all languages when the source and target languages are similar. When the languages are different, we observe a drop in performance, in particular for the distant language. In future work, we will extend our analysis to include additional source and target languages from different language families, such as more Asian languages. We will also work towards improving the training efficiency of character-level models, which is one of their main bottlenecks. A Example model outputs. Tables 3, 4 and 5 contain example translations produced by our different bilingual and multilingual models trained on the UN datasets. In Figures 4, 5, 6 and 7 we plot example alignments produced by our different bilingual and multilingual models trained on the UN datasets, always testing on translation from FR to EN. The alignments are produced by extracting the encoder-decoder attention produced by the last decoder layer of our transformer/convtransformer models.
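One way the attention extraction and CCA comparison described above could be implemented is sketched below; the exact aggregation over attention heads and the number of canonical components used in the paper may differ, so this is an illustrative assumption.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def alignment_matrix(attn_weights):
    """Average the encoder-decoder attention of the last decoder layer over heads to
    obtain a (target_len, source_len) character alignment matrix.
    attn_weights: (num_heads, target_len, source_len) array."""
    return attn_weights.mean(axis=0)

def alignment_correlation(align_bilingual, align_multilingual):
    """Project two alignment matrices for the same sentence into a common space with
    CCA and return the correlation of the first pair of canonical variates."""
    cca = CCA(n_components=1)
    u, v = cca.fit_transform(align_bilingual, align_multilingual)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]
```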
We observe some patterns: (i) for bilingual translation (Figure 4), the convtransformer has a sharper weight distribution on the matching characters and words than the transformer; (ii) for multilingual translation of close languages (FR+ES→EN, Figure 5), both transformer and convtransformer are able to preserve the word alignments, but the alignments produced by the convtransformer appear to be slightly less noisy; (iii) for multilingual translation of distant languages (FR+ZH→EN, Figure 6), the character alignments of the transformer become visually much noisier and concentrated on a few individual chracters and many word alignments dissolve, while the convtransformer character alignments remain more spread out and word alignment is much better preserved. This is another indication that the convtransformer is more robust for multilingual translation of distant languages. (iv) for multilingual translation with three inputs, where two of the three languages are close (FR+ES+ZH→EN, Figure 7), we observe a similar pattern, with the word alignments being better preserved by the convtransformer. source Pour que ce cadre institutionnel soit efficace, il devra remédier aux lacunes en matière de réglementation et de mise en oeuvre qui caractérisentà ce jour la gouvernance dans le domaine du développement durable. reference For this institutional framework to be effective, it will need to fill the regulatory and implementation deficit that has thus far characterized governance in the area of sustainable development. FR→EN transformer To ensure that this institutional framework is effective, it will need to address regulatory and implementation gaps that characterize governance in sustainable development. convtransformer In order to ensure that this institutional framework is effective, it will have to address regulatory and implementation gaps that characterize governance in the area of sustainable development. To ensure that this institutional framework is effective, it will need to address gaps in regulatory and implementation that characterize governance in the area of sustainable development. convtransformer In order to ensure that this institutional framework is effective, it will be necessary to address regulatory and implementation gaps that characterize governance in sustainable development so far. To ensure that this institutional framework is effective, gaps in regulatory and implementation that have characterized governance in sustainable development to date. convtransformer For this institutional framework to be effective, it will need to address gaps in regulatory and implementation that characterize governance in the area of sustainable development. To ensure that this institutional framework is effective, it will need to address regulatory and implementation gaps that are characterized by governance in the area of sustainable development. convtransformer If this institutional framework is to be effective, it will need to address gaps in regulatory and implementation that are characterized by governance in the area of sustainable development. source Estamos convencidos de que el futuro de la humanidad en condiciones de seguridad, la coexistencia pacífica, la tolerancia y la reconciliación entre las naciones se verán reforzados por el reconocimiento de los hechos del pasado. reference We strongly believe that the secure future of humanity, peaceful coexistence, tolerance and reconciliation between nations will be reinforced by the acknowledgement of the past. 
ES→EN transformer We are convinced that the future of humanity in conditions of security, peaceful coexistence, tolerance and reconciliation among nations will be strengthened by recognition of the facts of the past. convtransformer We are convinced that the future of humanity under conditions of safe, peaceful coexistence, tolerance and reconciliation among nations will be reinforced by the recognition of the facts of the past. We are convinced that the future of mankind under security, peaceful coexistence, tolerance and reconciliation among nations will be strengthened by the recognition of the facts of the past. convtransformer We are convinced that the future of humanity in safety, peaceful coexistence, tolerance and reconciliation among nations will be reinforced by the recognition of the facts of the past. We are convinced that the future of humanity in safety, peaceful coexistence, tolerance and reconciliation among nations will be strengthened by the recognition of the facts of the past. convtransformer We are convinced that the future of humanity in safety, peaceful coexistence, tolerance and reconciliation among nations will be strengthened by the recognition of the facts of the past. We are convinced that the future of mankind in safety, peaceful coexistence, tolerance and reconciliation among nations will be strengthened by the recognition of the facts of the past. convtransformer We are convinced that the future of mankind in security, peaceful coexistence, tolerance and reconciliation among nations will be strengthened by the recognition of the facts of the past. source ZH 利用专家管理农场对于最大限度提高生产率和灌溉水使用效率也是重要的。 source ZH tjh|et fny|pe tp|gj pei|fnrt cf|gf jb|dd bv|ya rj|ym tg|u|yx t iak|ivc|ii wgkq0|et uqt|yx bn j tgj|s r. reference EN The use of expert farm management is also important to maximize land productivity and efficiency in the use of irrigation water. ZH→EN transformer The use of expert management farms is also important for maximizing productivity and irrigation use. convtransformer The use of experts to manage farms is also important for maximizing efficiency in productivity and irrigation water use. The use of expert management farms is also important for maximizing productivity and efficiency in irrigation water use. convtransformer The use of expert management farms is also important for maximizing productivity and irrigation water efficiency. The use of expert farm management is also important for maximizing productivity and irrigation water use efficiency. convtransformer The use of expert management farms to maximize efficiency in productivity and irrigation water use is also important. The use of expert management farms is also important for maximizing productivity and irrigation water use. convtransformer It is also important that expert management farms be used to maximize efficiency in productivity and irrigation use.
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BWlCpme3TS
We perform an in-depth investigation of the suitability of self-attention models for character-level neural machine translation.
The field of medical diagnostics contains a wealth of challenges which closely resemble classical machine learning problems; practical constraints, however, complicate the translation of these endpoints naively into classical architectures. Many tasks in radiology, for example, are largely problems of multi-label classification wherein medical images are interpreted to indicate multiple present or suspected pathologies. Clinical settings drive the necessity for high accuracy simultaneously across a multitude of pathological outcomes and greatly limit the utility of tools which consider only a subset. This issue is exacerbated by a general scarcity of training data and maximizes the need to extract clinically relevant features from available samples -- ideally without the use of pre-trained models which may carry forward undesirable biases from tangentially related tasks. We present and evaluate a partial solution to these constraints by using LSTMs to leverage interdependencies among target labels in predicting 14 pathologic patterns from chest x-rays, and establish the state of the art on the largest publicly available chest x-ray dataset from the NIH without pre-training. Furthermore, we propose and discuss alternative evaluation metrics and their relevance in clinical practice. Medical diagnostics have increasingly become a more interesting and viable endpoint for machine learning. A general scarcity of publicly available medical data, however, inhibits its rapid development. Pre-training on tangentially related datasets such as ImageNet BID4 has been shown to help in circumstances where training data is limited, but may introduce unintended biases which are undesirable in a clinical setting. Furthermore, most clinical settings will drive a need for models which can accurately predict a large number of diagnostic outcomes. This essentially turns many medical problems into multi-label classification with a large number of targets, many of which may be subtle or poorly defined and are likely to be inconsistently labeled. In addition, unlike the traditional multi-label setting, predicting the absence of each label is as important as predicting its presence in order to minimize the possibility of misdiagnosis. Each of these challenges drives a need for architectures which consider clinical context to make the most of the data available. Chest x-rays are the most common type of radiology exam in the world and a particularly challenging example of multi-label classification in medical diagnostics. Making up nearly 45% of all radiological studies, the chest x-ray has achieved global ubiquity as a low-cost screening tool for a wealth of pathologies including lung cancer, tuberculosis, and pneumonia. Each scan can contain dozens of patterns corresponding to hundreds of potential pathologies and can thus be difficult to interpret, suffering from high disagreement rates between radiologists and often resulting in unnecessary follow-up procedures. Complex interactions between abnormal patterns frequently have significant clinical meaning that provides radiologists with additional context. For example, a study labeled to indicate the presence of cardiomegaly (enlargement of the cardiac silhouette) is more likely to additionally have pulmonary edema (abnormal fluid in the extravascular tissue of the lung), as the former may suggest left ventricular failure, which often causes the latter.
The presence of edema further predicates the possible presence of both consolidation (air space opacification) and a pleural effusion (abnormal fluid in the pleural space). Training a model to recognize the potential for these interdependencies could enable better prediction of pathologic outcomes across all categories while maximizing the data utilization and its statistical efficiency. Among the aforementioned challenges, this work firstly addresses the problem of predicting multiple labels simultaneously while taking into account their conditional dependencies during both the training and the inference. Similar problems have been raised and analyzed in the work of BID30 BID1 with the application of image tagging, both outside the medical context. The work of BID26 for chest x-ray annotations are closest to ours. All of them utilize out-of-the-box decoders based on recurrent neural networks (RNNs) to sequentially predict the labels. Such a naive adoption of RNNs is problematic and often fails to attend to peculiarities of the medical problem in their design, which we elaborate on in Section 2.3 and Section 3.3.1.In addition, we hypothesize that the need for pre-training may be safely removed when there are sufficient medical data available. To verify this, all our models are trained from scratch, without using any extra data from other domains. We directly compare our with those of that are pre-trained on ImageNet. Furthermore, to address the issue of clinical interpretability, we juxtapose a collection of alternative metrics along with those traditionally used in machine learning, all of which are reported in our benchmark. This work brings state-of-the-art machine learning models to bear on the problem of medical diagnosis with the hope that this will lead to better patient outcomes. We have advanced the existing research in three orthogonal directions:• This work experimentally verifies that without pre-training, a carefully designed baseline model that ignores the label dependencies is able to outperform the pre-trained state-ofthe-art by a large margin.• A collection of metrics is investigated for the purpose of establishing clinically relevant and interpretable benchmarks for automatic chest x-ray diagnosis.• We propose to explicitly exploit the conditional dependencies among abnormality labels for better diagnostic . Existing RNNs are purposely modified to accomplish such a goal. The on the proposed metrics consistently indicate their superiority over models that do not consider interdependencies. The present work is part of a recent effort to harness advances in Artificial Intelligence and machine learning to improve computer-assisted diagnosis in medicine. Over the past decades, the volume of clinical data in machine-readable form has grown, particularly in medical imaging. While previous generations of algorithms struggled to make effective use of this high-dimensional data, modern neural networks have excelled at such tasks. Having demonstrated their superiority in solving difficult problems involving natural images and videos, recent surveys from BID18; BID25; BID22 suggest that they are rapidly becoming the "de facto" standard for classification, detection, and segmentation tasks with input modalities such as CT, MRI, x-ray, and ultrasound. 
As further evidence, models based on neural networks dominate the leaderboard in most medical imaging challenges 1,2.Most successful applications of neural networks to medical images rely to a large extent on convolutional neural networks (ConvNets), which were first proposed in BID15. This comes as no surprise since ConvNets are the basis of the top performing models for natural image understanding. For abnormality detection and segmentation, the most popular variants are UNets from BID24 and VNets from BID20, both built on the idea of fully convolutional neural networks introduced in BID19. For classification, representative examples of neural network-based models from the medical literature include: BID5 for skin cancer classification, BID7 for diabetic retinopathy, BID13 for pulmonary tuberculosis detection in x-rays, and BID11 for lung cancer diagnosis with chest CTs. All of the examples above employed 2D or 3D ConvNets and all of them provably achieved near-human level performance in their particular setup. Our model employs a 2D ConvNet as an image encoder to process chest x-rays. Given a finite set of possible labels, the multi-label classification problem is to associate each instance with a subset of those labels. Being relevant to applications in many domains, a variety of models have been proposed in the literature. The simplest approach, known as binary relevance, is to break the multi-label classification problem into independent binary classification problems, one for each label. A recent example from the medical literature is. The appeal of binary relevance is its simplicity and the fact that it allows one to take advantage of a rich body of work on binary classification. However, it suffers from a potentially serious drawback: the assumption that the labels are independent. For many applications, such as the medical diagnostic application motivating this work, there are significant dependencies between labels that must be modeled appropriately in order to maximize the performance of the classifier. Researchers have sought to model inter-label dependencies by making predictions over the label power set (e.g. BID29 and BID23), by training classifiers with loss functions that implicitly represent dependencies (e.g. BID16), and by using a sequence of single-label classifiers, each of which receives a subset of the previous predictions along with the instance to be classified (e.g. BID3). The later approach is equivalent to factoring the joint distribution of labels using a product of conditional distributions. Recent research has favored recurrent neural networks (RNNs), which rely on their state variables to encode the relevant information from the previous predictions (e.g. BID30 and Chen et al. FORMULA0). The present work falls into this category. To detect and classify abnormalities in chest x-ray images, we propose using 2D ConvNets as encoders and decoders based on recurrent neural networks (RNNs). Recently, BID17 proposed an RNN-based model for abnormality classification that, based on the title of their paper, bears much resemblance to ours. However, in their work the RNN is used to process the inputs rather than the outputs, which fails to capture dependencies between labels; something we set out to explicitly address. They also deal exclusively with time series data rather than high-resolution images. The work of BID26 also addresses the problem of chest x-ray annotation. 
They built a cascaded three-stage model using 2D ConvNets and RNNs to sequentially annotate both the abnormalities and their attributes (such as location and severity). Their RNN decoder resembles ours in its functionality, but differs in the way the sequence of abnormalities are predicted. In each RNN step, their model predicts one of T abnormalities with softmax, and stops when reaching a predefined upper limit of total number of steps (5 is used in theirs). Instead, our model predicts the presence or absence of t-th abnormality with sigmoid at time step t and the total number of steps is the number of abnormalities. The choice of such a design is inspired by Neural Autoregressive Density Estimators (NADEs) of BID14. Being able to predict the absence of an abnormality and feed to the next step, which is not possible with softmax and argmax, is preferable in the clinical setting to avoid any per-class overcall and false alarm. In addition, the absence of a certain abnormality may be a strong indication of the presence or absence of others. Beyond having a distinct approach to decoding, their model was trained on the OpenI 3 dataset with 7000 images, which is smaller and less representative than the dataset that we used (see below). In addition, we propose a different set of metrics to use in place of BLEU BID21, commonly used in machine translation, for better clinical interpretation. In the non-medical setting, BID30 proposed a similar ConvNet-RNN architecture. Their choice of using an RNN decoder was also motivated by the desire to model label dependencies. However, they perform training and inference in the manner of BID26. Another example of this combination of application, architecture, and inference comes from BID1 whose work focused on eliminating the need to use a pre-defined label order for training. We show in the experiments that ordering does not seem to impose as a significant constraint when models are sufficiently trained. Finally, proposed a 2D ConvNet for classifying abnormalities in chest x-ray images. However, they used a simple binary relevance approach to predict the labels. As we mentioned earlier, there is strong clinical evidence to suggest that labels do in fact exhibit dependencies that we attempt to model. They also presented the largest public x-ray dataset to date ("ChestX-ray8"). Due to its careful curation and large volume, such a collection is a more realistic retrospective clinical study than OpenI and therefore better suited to developing and benchmarking models. Consequently, we use "ChestX-ray8" to train and evaluate our model. And it should be noted that unlike, we train our models from scratch to ensure that the image encoding best captures the features of x-ray images as opposed to natural images. The following notations are used throughout the paper. Denote x as an input image, and x ∈ R w×h×c where w, h and c represent width, height, and channel. Denote y as a binary vector of dimensionality T, the total number of abnormalities. We used superscripts to indicate a specific dimensionality. Thus, given a specific abnormality t, y t = 0 indicates its absence and y t = 1 its presence. We use subscripts to index a particular example, for instance, {x i, y i} is the i-th example. In addition, θ denotes the union of parameters in a model. We also use m to represent a vector with each element m t as the mean of a Bernoulli distribution. 
A recent variant of the Convolutional Neural Network (ConvNet) is proposed in BID10, dubbed Densely Connected Networks (DenseNet). As a direct extension of Deep Residual Networks BID8 and Highway Networks BID27, the key idea behind DenseNet is to establish shortcut connections between all pairs of layers at different depths of a very deep neural network. It has been argued in BID10 that, as a result of the extensive and explicit feature reuse in DenseNets, they are both computationally and statistically more efficient. This property is particularly desirable in dealing with medical imaging problems, where the number of training examples is usually limited and overfitting tends to prevail in models with more than tens of millions of parameters. We therefore propose a model based on the design of DenseNets while taking into account the peculiarity of the medical problems at hand. Firstly, the inputs of the model are of much higher resolution. A lower resolution, typically 256 × 256, may be sufficient in dealing with problems related to natural images, photos and videos; a higher resolution, however, is often necessary to faithfully represent regions in images that are small and localized. Secondly, the proposed model is much smaller in network depth. While there is ample evidence suggesting the use of hundreds of layers, such models typically require hundreds of thousands to millions of examples to train. Large models are prone to overfitting with one tenth the training data. FIG0 highlights such a design. Ignoring the nature of the conditional dependencies among the indicators, y t, one could establish the following probabilistic model: DISPLAYFORM0 Equ assumes that knowing one label does not provide any additional information about any other label. Therefore, in principle, one could build a separate model for each y t, with no parameters shared among them. However, it is common in the majority of multi-class settings to permit a certain degree of parameter sharing among individual classifiers, which encourages the learned features to be reused among them. Furthermore, sharing alleviates the effect of overfitting as the example-to-parameter ratio is much higher. The resulting encoded representation of the input is a vector that captures the higher-order semantics that are useful for the decoding task. K is the growth rate in BID10 and S is the stride. We also include the filter and pooling dimensionality when applicable. Unlike a DenseNet that has 16 to 32 ConvBlocks within a DenseBlock, our model uses 4 in order to keep the total number of parameters small. Our proposed RNN decoder is illustrated on the bottom right. During training, the model optimizes the following Maximum Log-likelihood Estimate (MLE) criterion: DISPLAYFORM0 where P (y t |x, θ) is a Bernoulli distribution with its mean m t parameterized by the model. In particular, m t = sigmoid(f (x, θ)). As the labels are considered independent, during inference a binary label is generated for each factor independently with y t * = arg max P (y t |x, θ). This is equivalent to setting the classification threshold to 0.5. As discussed at length in Section 1, it is hardly true that abnormalities are independent of each other. Hence the assumption made by Equ is undoubtedly too restrictive.
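For reference, this independent (binary relevance) baseline can be sketched compactly: a shared image encoder produces a single vector from which T sigmoid outputs parameterize the per-abnormality Bernoulli means, trained with (optionally weighted) binary cross-entropy. PyTorch is assumed, the encoder is left abstract, and the names are ours.

```python
import torch
import torch.nn as nn

class IndependentMultiLabelModel(nn.Module):
    """Shared encoder with independent sigmoid outputs for T abnormalities."""
    def __init__(self, encoder, enc_dim, num_labels):
        super().__init__()
        self.encoder = encoder                       # e.g., the DenseNet-style ConvNet
        self.classifier = nn.Linear(enc_dim, num_labels)

    def forward(self, x):
        x_enc = self.encoder(x)                      # (batch, enc_dim)
        return self.classifier(x_enc)                # logits; sigmoid gives the means m^t

def independent_nll(logits, targets, pos_weight=None):
    """Negative log-likelihood under the factorized Bernoulli model
    (a weighted cross-entropy when pos_weight is given)."""
    return nn.functional.binary_cross_entropy_with_logits(
        logits, targets, pos_weight=pos_weight, reduction="mean")

# Inference: predict y^t = 1 whenever sigmoid(logit) > 0.5, i.e. logit > 0:
# predictions = (logits > 0).float()
```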
In order to treat the multi-label problem in its full generality, we can begin with the following factorization, which makes no assumption of independence: DISPLAYFORM0 Here, the statistical dependencies among the indicators, y t, are explicitly modeled within each factor so the absence or the presence of a particular abnormality may suggest the absence or presence of others. The factorization in Equ has been the central study of many recent models. BID0 proposed the first neural network based model, refined by BID14 and BID6, all of which used the model in the context of unsupervised learning in small discrete data or small image patches. Recently BID28; BID2 popularized the so-called "sequence-to-sequence" model where a Recurrent Neural Network (RNN) decoder models precisely the same joint distribution while conditioned on the output of an encoder. Compared with the previous work, RNNs provide a more general framework to model Equ and an unmatched proficiency in capturing long term dependencies when K is large. We therefore adopt the Long-short Term Memory Networks (LSTM) BID9 and treat the multi-label classification as sequence prediction with a fixed length. The formulation of our LSTM is particularly similar to those used in image and video captioning BID32 BID33, but without the use of an attention mechanism and without the need of learning when to stop. Given an input x, the same DenseNet-based encoder of Section 3.2 is applied to produce a lower dimensional vector representation of it with x enc = f enc (x) For the decoder, x enc is used to initialize both the states and memory of an LSTM with DISPLAYFORM1 where f h0 and f c0 are standard feedforward neural networks with one hidden layer. With h 0 and c 0, the LSTM decoder is parameterized as g DISPLAYFORM2 DISPLAYFORM3 DISPLAYFORM4 where model parameters consist of three matrices Ws, Us, Vs, vectors bs and a scalar b l. y is a vector code of the ground truth labels that respects a fixed ordering, with each element being either 0 or 1. All the vectors, including hs, gs, cs, bs, q and y are row vectors such that the vector-matrix multiplication makes sense. denotes the element-wise multiplication. Both sigmoid and tanh are element-wise nonlinearities. For brevity, we summarize one step of decoder computation as m t = f dec (x enc, y t−1, h t−1) where the decoder LSTM computes sequentially the mean of a Bernoulli distribution. With Equ, each of its factor may be rewritten as DISPLAYFORM5 The choice of using sigmoid to predict y t is by design. Standard sequence-to-sequence models often use softmax to predict one out of T classes and thus need to learn explicitly an "end-ofsequence" class. This is not desirable in our context due to the sparseness of the labels, ing in the learned decoder being strongly biased towards predicting "end-of-sequence" while missing infrequently appearing abnormalities. Secondly, during the inference of the softmax based RNN decoder, the prediction at the current step is largely based on the presence of abnormalities at all previous steps due to the use of argmax. However, in the medical setting, the absence of previously predicted abnormalities may also be important. Sigmoid conveniently addresses these issues by explicitly predicting 0 or 1 at each step and it does not require the model to learn when to stop; the decoder always runs for the same number of steps as the total number of classes. FIG0 contains the overall architecture of the decoder. 
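A simplified sketch of the decoder just described is shown below: an LSTM initialized from the encoded image runs for exactly T steps, emits a Bernoulli mean through a sigmoid at each step, and feeds the previous step's binary label (the ground truth during training, its own thresholded prediction at inference) back as input. The gating equations and parameter shapes of the paper are abbreviated here by a standard LSTMCell, and the handling of the first step's input is our own assumption.

```python
import torch
import torch.nn as nn

class SequentialLabelDecoder(nn.Module):
    """Predict T abnormality indicators one at a time, conditioning on previous labels."""
    def __init__(self, enc_dim, hidden_dim, num_labels):
        super().__init__()
        self.init_h = nn.Linear(enc_dim, hidden_dim)   # simplification of f_h0
        self.init_c = nn.Linear(enc_dim, hidden_dim)   # simplification of f_c0
        self.cell = nn.LSTMCell(input_size=1, hidden_size=hidden_dim)
        self.out = nn.Linear(hidden_dim, 1)
        self.num_labels = num_labels

    def forward(self, x_enc, y_true=None):
        h = torch.tanh(self.init_h(x_enc))
        c = torch.tanh(self.init_c(x_enc))
        y_prev = x_enc.new_zeros(x_enc.size(0), 1)     # no label before the first step
        means = []
        for t in range(self.num_labels):               # always runs for T steps
            h, c = self.cell(y_prev, (h, c))
            m_t = torch.sigmoid(self.out(h))           # Bernoulli mean for abnormality t
            means.append(m_t)
            if y_true is not None:                     # teacher forcing during training
                y_prev = y_true[:, t:t + 1]
            else:                                      # greedy inference, threshold 0.5
                y_prev = (m_t > 0.5).float()
        return torch.cat(means, dim=1)                 # (batch, T) predicted means
```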
During training, the model optimizes DISPLAYFORM0 Compared with Equ, the difference is the explicit dependencies among y t s. One may also notice that such a factorization is not unique -in fact, there exist T! different orderings. Although mathematically equivalent, in practice, some of the orderings may in a model that is easier to train. We investigate in Section 4 the impact of such decisions with two distinct orderings. The inference of such a model is unfortunately intractable as y * = arg max y P (y 0, . . ., y T |x). Beam search BID28 is often used as an approximation. We have found in practice that greedy search, which is equivalent to beam search with size 1, in similar performance due to the binary sampling nature of each y t, and use it throughout the experiments. It is equivalent to setting 0.5 as the discretization threshold on each of the factors. To verify the efficacy of the proposed models in medical diagnosis, we conduct experiments on the dataset introduced in. It is to-date the largest collection of chest x-rays that is publicly available. It contains in total 112,120 frontal-view chest x-rays each of which is associated with the absence or presence of 14 abnormalities. The dataset is originally released in PNG format, with each image rescaled to 1024 × 1024.As there is no standard split for this dataset, we follow the guideline in to randomly split the entire dataset into 70% for training, 10% for validation and 20% for training. The authors of noticed insignificant performance difference with different random splits, as confirmed in our experiments by the observation that the performance on validation and test sets are consistent with each other. As the dataset is relatively new, the complete set of metrics have yet to be established. In this work, the following metrics are considered, and their advantage and drawbacks outlined below.1. Negative log-probability of the test set (NLL). This metric has a direct and intuitive probabilistic interpretation: The lower the NLL, the more likely the ground truth label. However, it is difficult to associate it with a clinical interpretation. Indeed, it does not directly reflect how accurate the model is at diagnosing cases with or without particular abnormalities.. This is the reported metric of and it is widely used in modern biostatistics to measure collectively the rate of true detections and false alarms. In particular, we define 4 quantities: FORMULA0 The ROC curve has typically horizontal axis as (1-specificity) and vertical axis as sensitivity. Once P (y i |x) is available, the curve is generated by varying the decision threshold to discretize the probability into either 0 or 1. Despite of its clinical relevance, P (y i |x) is intractable to compute with the model of Equ due to the need of marginalizing out other binary random variables. It is however straightforward to compute with the model of Equ due the independent factorization.3. DICE coefficient. As a similarity measure over two sets, DICE coefficient is formulated as DICE(y α, y β) = (2y α y β)/(y 2 α + y 2 β) = 2TP/(2TP + FP + FN) with the maxima at 1 when y α ≡ y β. Such a metric may be generalized in cases where y α is a predicted probability with y α = P (y|x) and y β is the binary-valued ground truth, as is used in image segmentation tasks such as in BID24; BID20. We adopt such a generalization as our models naturally output probabilities.4. Per-example sensitivity and specificity (PESS). 
The following formula is used to compute PESS DISPLAYFORM0 where N is the size of the test set. Notice that the computation of sensitivity and specificity requires a binary prediction vector. Therefore, without introducing any thresholding bias, we useŷ DISPLAYFORM1 5. Per-class sensitivity and specificity (PCSS). Unlike PESS, the following formula is used to compute PCSS DISPLAYFORM2 whereŷ t i follows the same threshold of 0.5 as in PESS. Unlike PCSS where the average is over examples, PCSS averages over abnormalities instead. Three types of models are tuned on the training set. We have found that data augmentation is crucial in combatting the overfitting in all of our experiments despite their relatively small size. In particular, the input image of resolution 512 × 512 is randomly translated in 4 directions by 25 pixels, randomly rotated from -15 to 15 degrees, and randomly scaled between 80% and 120%. Furthermore, the ADAM optimizer BID12 is used with an initial learning rate of 0.001 which is multiplied by 0.9 whenever the performance on the validation set does not improve during training. Early stop is applied when the performance on the validation set does not improve for 10,000 parameter updates. All the reported metrics are computed on the test set with models selected with the metric in question on the validation set. In order to ensure a fair comparison, we constrain all models to have roughly the same number of parameters. For model a, where labels are considered independent, a much higher network growth rate is used for the encoder. For model b1 and model b2 where LSTMs are used as decoders, the encoders are narrower. The exact configuration of three models is shown in TAB1. In addition, we investigate the effect of ordering in the factorization of Equ. In particular, model b1 sorts labels by their frequencies in the training set while model b1 orders them alphabetically. All models are trained with MLE with the weighted cross-entropy loss introduced in. All models are trained end-to-end from scratch, without any pre-training on ImageNet data. The AUC per abnormality is shown in TAB2, computed based on the marginal distribution of P (y|x). Only model a is included as such marginals are in general intractable for the other two due to the dependencies among y t s. In addition, Table 3 compares all three models based on the proposed metrics from Section 4.2. It can be observed that our baseline model significantly outperformed the previous state-of-the-art. According to Table 3, considering label dependencies brings significant benefits in all 4 metrics and the impact of ordering seems to be marginal when the model is sufficiently trained. Table 3: Test set performance on negative log-probability (NLL), DICE, per-example sensitivity (PESS) at a threshold 0.5 and per-class sensitivity and specificity (PCSS) at a threshold of 0.5. See Section 4.2 for explanations of the metrics. In addition to model a used in TAB2, model b1 and model b2 corresponds to the model introduced in Section 3.3, with the difference in the ordering of the factorization in Equ. model b1 sorts labels by their frequency in the training set in ascending order. As a comparison, model b2 orders labels alphabetically according to the name of the abnormality. To improve the quality of computer-assisted diagnosis of chest x-rays, we proposed a two-stage end-to-end neural network model that combines a densely connected image encoder with a recurrent neural network decoder. 
The first stage was chosen to address the challenges to learning presented by high-resolution medical images and limited training set sizes. The second stage was designed to allow the model to exploit statistical dependencies between labels in order to improve the accuracy of its predictions. Finally, the model was trained from scratch to ensure that the best application-specific features were captured. Our experiments have demonstrated both the feasibility and effectiveness of this approach. Indeed, our baseline model significantly outperformed the current state-of-the-art. The proposed set of metrics provides a meaningful quantification of this performance and will facilitate comparisons with future work. While a limited exploration into the value of learning interdependencies among labels yields promising results, additional experimentation will be required to further explore the potential of this methodology both as it applies specifically to chest x-rays and to medical diagnostics as a whole. One potential concern with this approach is the risk of learning biased interdependencies from a limited training set which does not accurately represent a realistic distribution of pathologies: if every example of cardiomegaly is also one of cardiac failure, the model may learn to depend too much on the presence of other patterns, such as edemas, which do not always accompany enlargement of the cardiac silhouette. This risk is heightened when dealing with data labeled with a scheme which mixes pathologies, such as pneumonia, with patterns symptomatic of those pathologies, such as consolidation. The best approach to maximizing feature extraction and leveraging interdependencies among target labels likely entails training from data labeled with an ontology that inherently possesses some consistent, known relational structure. This will be the endpoint of a future study.
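As a supplementary reference for the metrics of Section 4.2: the PESS and PCSS formulas are garbled in this copy (the DISPLAYFORM placeholders), so the NumPy sketch below encodes one plausible reading, averaging sensitivity and specificity over test examples for PESS and over the 14 abnormality classes for PCSS, with the 0.5 threshold mentioned in the text.

```python
import numpy as np

def sens_spec(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) for binary vectors."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    return sens, spec

def pess(y_true, y_prob, threshold=0.5):
    """Per-example sensitivity/specificity: averaged over the N test examples."""
    y_pred = (y_prob > threshold).astype(int)
    scores = [sens_spec(t, p) for t, p in zip(y_true, y_pred)]
    return np.mean(scores, axis=0)          # (mean sensitivity, mean specificity)

def pcss(y_true, y_prob, threshold=0.5):
    """Per-class sensitivity/specificity: averaged over the abnormality classes."""
    y_pred = (y_prob > threshold).astype(int)
    scores = [sens_spec(y_true[:, t], y_pred[:, t]) for t in range(y_true.shape[1])]
    return np.mean(scores, axis=0)
```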
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1uP7ebAW
We present state-of-the-art results of using neural networks to diagnose chest x-rays.
have achieved high classification accuracy in distinguishing swim bouts of zebrafish using a Support Vector Machine (SVM). Convolutional Neural Networks (CNNs) have reached superior performance in various image recognition tasks over SVMs, but these powerful networks remain a black box. Reaching better transparency helps to build trust in their classifications and makes learned features interpretable to experts. Using a recently developed technique called Deep Taylor Decomposition, we generated heatmaps to highlight input regions of high relevance for predictions. We find that our CNN makes predictions by analyzing the steadiness of the tail's trunk, which markedly differs from the manually extracted features used by. We further uncovered that the network paid attention to experimental artifacts. Removing these artifacts ensured the validity of predictions. After correction, our best CNN beats the SVM by 6.12%, achieving a classification accuracy of 96.32%. Our work thus demonstrates the utility of AI explainability for CNNs. In the study by , a well-performing classifier allowed to correlate neural interventions with behavioral changes. Support Vector Machines (SVMs) were commonly applied to such classification tasks, relying on feature engineering by domain experts. In recent years, Convolutional Neural Networks (CNNs) have proven to reach high accuracies in classification tasks on images and videos reducing the need for manual feature engineering. introduced them in the 90s, CNNs had their break-through in the competition ILSVRC2012 with the architecture of. Since then, more and more sophisticated architectures have been designed enabling them to identify increasingly abstract features. This development has become possible due to the availability of larger training sets, computing resources, GPU training implementations, and better regularization techniques, such as Dropout; ). While these more complex deep neural network architectures achieved better , they also kept their learnt features hidden if not further analyzed. This caused CNNs to come with significant drawbacks: a lack of trust in their classifications, missing interpretability of learned features in the application domain, and the absence of hints as to what data could enhance performance . Explaining the decisions made by CNNs might even become a legal requirement in certain applications . In order to overcome these drawbacks, subsequent research has developed approaches to shed light on the inner workings of CNNs. These approaches have been successfully used for uncovering how CNNs might learn unintended spurious correlations, termed "Clever Hans" predictions . Such predictions could even become harmful if the predictions entailed decisions with severe consequences . Also, since deep neural networks have become a popular machine learning technique in applied domains, spurious correlations would undermine scientific discoveries. This paper focuses on zebrafish research as an applied domain of AI explainability, considering that the research community around this organism has grown immensely. The zebrafish is an excellent model organism for vertebrates, including humans, due to the following four reasons: The genetic codes of humans and zebrafish are about 70% orthologue . The fish are translucent which allows non-invasive observation of changes in the organism . Furthermore, zebrafish are relatively cheap to maintain, produce plenty of offspring, and develop rapidly. 
Finally, they are capable of recovering their brain structures within days after brain injury . In this paper, we adapt CNNs to work on highly controlled zebrafish video recordings and show the utility of a recently developed AI explainability technique on this task. We train the network on optical flow for binary classifying swim bouts and achieve superior performance when compared to the current state-of-the-art in bout classification . We then create heatmaps over the videos with the "iNNvestigate" toolbox which highlight the areas that our CNN pays attention to when making a prediction. The ing heatmaps show that our CNN learns reasonable features which are very different from those manually composed by. In the following, we will give an overview over relevant CNN architectures and approaches. Then, we will summarize existing AI explainability approaches focusing on attribution techniques. Finally, we highlight important studies of behavioral zebrafish research and give details of the study by. identified five relevant types of video architectures: CNN + LSTM (Long-Short-Term Memory), 3D-CNN , Two-Stream , 3D-Fused Two-Stream, and Two-Stream 3D-CNN. They differ in whether the convolutions are based on 2D or 3D kernels, whether optical flow is added, and how consecutive frames exchange information. Optical flow can be described as the horizontal and vertical displacement of a pixel from one frame to the next (Farnebäck ). Several algorithms for flow calculation exist, such as TV-L1 , Brox , and Farneback (Farnebäck ). Even novel deep learning approaches have been developed, e.g. FlowNet; ). initialized several CNN architectures with pre-trained weights and trained them on human action recognition datasets to show the benefits of transfer learning in video classifiers. Previous studies had shown this for images . Also, they found that adding a temporal stream based on optical flow always improved performance. This idea of a network with a spatial stream for appearance and a temporal stream for motion had been first developed by. AI explainability techniques. Current AI explainability techniques on images can be largely categorized into two types: attribution and feature visualization . Attribution relates regions of a sample image to activations of units in the CNN, while feature visualization uncovers what kinds of inputs strongly activate a particular unit. One of the earlier and quite successful attribution approaches is called Sensitivity Analysis ). It is based on the idea that if a single pixel were marginally changed, the prediction would worsen significantly for an important pixel, but only slightly for less important ones. and showed simple ways of producing approximate relevance heatmaps. By occluding parts of an image one by one and observing the change in activation, they could measure the influence of different image regions. This allowed them to check whether the CNN focused on the important objects of the image or performed its classification only based on contextual information. The technique applied in this paper is called Deep Taylor Decomposition ), which arose from Layer-Wise Relevance Propagation . It has been put into use with text , speech , and only once with video data . It highlights the areas in a sample image which the CNN deems most relevant for making a correct classification. We assume that the relevance of a pixel is determined by how much the classification accuracy would deteriorate if we altered this pixel. 
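That assumption, relevance as the sensitivity of the output to a pixel, is what a plain gradient map approximates; the short PyTorch sketch below illustrates it before the DTD propagation rules are summarized. The `model` call and shapes are generic placeholders, not the paper's networks.

```python
import torch

def sensitivity_map(model, x, target_class):
    """Sensitivity Analysis: per-pixel relevance approximated by the magnitude of the
    gradient of the target logit with respect to the input."""
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[:, target_class].sum()   # assumes model(x) returns (batch, classes) logits
    score.backward()
    return x.grad.abs()                       # same shape as x; larger values = more sensitive pixels
```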
DTD distributes relevance from the output layer to the input layer by applying specific propagation rules for each layer. This approach equals a first-order Taylor decomposition of the output function. The authors argue that it yields a better approximation of relevance than Sensitivity Analysis. included this technique in their "iNNvestigate" toolbox, which we used in our work. Apart from attribution techniques, CNNs can be explained using direct feature visualization approaches. They try to find intuitive representations of the patterns a given unit responds to particularly strongly or weakly. Among these are deconvolution networks , which are closely related to Sensitivity Analysis, and optimization approaches in the input domain . Also, instead of creating an abstract representation of features, one can select specific samples which highly activate or suppress a particular unit. even went a step further by hiding irrelevant parts of the input image, similar to LIME . Behavioral zebrafish research. As we argue in Section 1, zebrafish are a highly suitable model organism for the human body. They serve as the object of study in many fields, such as wound repair , visual processing in the brain (; ; ;), cancer research , and genetic modifications . Especially in neuroscientific research, understanding behavior and behavioral changes in response to cerebral interventions is of high importance . Given that prey movements can be clearly distinguished from other types of movements, hypothesized that there must be a dedicated circuitry in the zebrafish brain. They found a pathway from retinal ganglion cells to an area called AF7 projecting to the optic tectum, the nucleus of the medial longitudinal fasciculus, and the hindbrain, which in turn produces the characteristic motor output. They verified their findings by ablating the AF7 neuropil and observing that lesioned fish failed to respond to prey stimuli with a movement that a trained SVM would classify as prey. They have identified the following five features as the most discriminating ones, ordered by descending importance: In our work, we make use of the dataset they gathered for training their SVM and compare our CNN to the features above. We trained a Two-Stream CNN to distinguish prey and spontaneous swim bouts of larval zebrafish with short video samples. For data preparation, we first extracted standardized snippets from raw videos and then performed augmentation by subsampling, flipping, and cropping. After training, we computed heatmaps showing the regions of high relevance within frames 1. Data pre-processing. We used the raw video files recorded by with a high-speed camera at 300 frames per second and labeled as either spontaneous (0/negative, 56.1%) or prey (1/positive, 43.9%) bout. The heads of the fish were embedded in a substance called agarose to keep them steady. We turned the videos into grayscale, normalized them, and kept a crop of size 256x256 pixels from each frame, such that the right-most part of the bladder was central on the left, as shown in Figure 1. We did not include the head of the fish in the crop, because the eyes would give away information about the type of bout . More details in Appendix B. For correct centering we implemented a gamma correction with γ = exp (−skewness /param), (where param was tweaked to maximize detection performance (here param = 4.3)) and applied a binary threshold at value 3 to separate the bladder from the rest of the fish. 
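A rough OpenCV sketch of this centering step is given below: min-max normalization, skewness-based gamma correction with param = 4.3, a binary threshold at 3, and a 256x256 crop placed relative to the right-most contour (the bladder selection described next). The threshold direction, crop geometry, and function names are assumptions, and boundary handling is omitted.

```python
import cv2
import numpy as np
from scipy.stats import skew

def center_crop_on_bladder(frame: np.ndarray, param: float = 4.3, crop: int = 256) -> np.ndarray:
    """Normalize, gamma-correct, threshold out the (dark) swim bladder, and crop so that
    the bladder's right-most point sits centrally on the left edge of the crop."""
    norm = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX).astype(np.float32)
    gamma = np.exp(-skew(norm.ravel()) / param)
    corrected = np.clip(255.0 * (norm / 255.0) ** gamma, 0, 255).astype(np.uint8)
    _, mask = cv2.threshold(corrected, 3, 255, cv2.THRESH_BINARY_INV)   # keep near-black pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:                       # no candidate found; fall back to the uncropped frame
        return corrected
    bladder = max(contours, key=lambda c: c[:, :, 0].max())             # right-most contour
    x = int(bladder[:, :, 0].max())
    y = int(bladder[:, :, 1].mean())
    return corrected[y - crop // 2:y + crop // 2, x:x + crop]           # boundary checks omitted
```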
Since the eyes might also fall under this threshold, we declared the right-most contour the bladder. Each raw video contained several bout events which we extracted as sequences of 150 frames ing in 1,214 video snippets. Each sequence started 15 frames before the actual bout motion was detected. Extension to the pre-processing procedure. After training and heatmap analysis, we found that the trained CNN had found a shortcut for classification by using agarose motion instead of tail features. We therefore extended our pre-processing procedure by setting squares of 85x85 pixels in the upper and lower left-hand corners of all frames plain white. While this cut away some of the tail tip in rare occasions, it ensured that bouts could not be classified based on the agarose motion anymore. Data augmentation. Our data augmentation procedure worked with batches of 32 videos with the original data randomly shuffled, allowing a decent approximation of the gradient during training with some inherent noise, which is generally desirable to avoid falling into sharp minima . The procedure performed three augmentation steps: first subsampling, then flipping, and cropping, which achieved augmentation factors of 8, 2, and 9 respectively, totaling 174,816 augmented samples. We decided to compute optical flow after subsampling, not before, in order to create differing augmented samples. All samples of an augmented batch were derived from different original videos to ensure a good gradient approximation during training. In the subsampling step, our algorithm randomly selected 86 out of 150 frames under the constraint that between two frames, no more than 2 frames may be omitted. This was to ensure meaningful flow calculation in the subsequent step, because the tail could move quite fast. After subsampling, for each video our procedure selected one of the 86 frames to be the input for the spatial network. Then we computed the optical flow ing in 85 flow frames with one x and y component each. They were stacked to 170 channels, alternating x and y. We used the optical flow algorithm by Farnebäck with parameters 2 detecting flow even when the tail moved fast. The procedure then generated 18 augmented batches from each subsampled batch by randomly flipping vertically CNN architecture and framework. Just like, we used a twostream network with an adapted CNN-M-2048 network for each stream, as depicted in Figure 2. As shown by , this network can deal with a small sample size and learns quickly. The spatial stream had one gray-scale channel and the temporal stream 170 flow channels in the first layer. After obtaining the predicted probabilities of each stream by calculating the log-softmax of the individual two outputs, they were fused by averaging. We computed the negative log-likelihood loss of this average, which can be interpreted as the joint log-probability of both streams, assuming their statistical independence. Our dataset was made up of 38 files, with 28, 4, and 6 files for training, validation, and test sets respectively. The individual sets were not shuffled during training in order to allow sequential reads, which might have decreased training time. Notwithstanding, batches consisted of random samples due to our augmentation procedure explained above. Initialization of weights. We initialized both streams with weights pre-trained on ImageNet 3. 
This has become a common initialization strategy in various classification tasks (; ;), due to the utility of general features learnt from natural images, such as edge detection. While did not pre-train flow, found a pre-trained temporal stream to reach superior performance. With this initialization we hoped that training would require only fine-tuning and therefore less data and fewer epochs. Specifically, for the weights of the input layer of the spatial stream we took the average of the pretrained weights of 3 RGB-channels to get the weights for 1 grayscale-channel. For the temporal stream we copied the RGB-channels 56 2 3 times to get 170 channels, and added uniform random noise to all of them. This was to ensure that the channels evolved differently during training and should aid learning. Regarding outputs, we averaged the pre-trained weights of 500 units on the output layer to obtain the weights of two output neurons, because we dealt with 2 classes instead of 1,000 as in ImageNet. Training procedure. We made use of the Adam optimizer with standard settings and tuned its learning rate and weight decay coefficient -the neural network equivalent of L2-regularization . Furthermore, we added a gamma learning rate scheduler which) every epoch. Our training framework computed accuracy on the validation set after each epoch to estimate generalization performance. Since we were initializing our CNN with weights learned from quite a different domain, fine-tuning was crucial. In particular, we performed a hyperparameter search over learning rate ({1e-3, 1e-4, 1e-5}) and weight decay ({1e-2, 1e-3, 1e-4}) using a smaller dataset and training for about 8 epochs. With the best hyperparameters we trained the CNN on the full dataset for 5 epochs. Relevance analysis with heatmaps. Making our CNN more transparent required an AI explainability technique which would be well interpretable in the optical flow domain of our temporal stream. We expected feature visualization approaches and was conveniently accessible in the "iNNvestigate" toolbox . For validation purposes, we additionally generated saliency maps ) and heatmaps from Guided BackProp . The produced relevance heatmaps could be expected to give clues about what specific regions of optical flow, within a frame and across frames, the network was paying attention to. Also, we simplified the analysis by splitting the network into its individual streams. This was possible because no weights were learned after the final layer of each stream. Once the network was initialized correctly, "iNNvestigate" made the generation of heatmaps surprisingly simple. Also, with about 50 minutes for the whole analysis it was quite fast even on CPU, because it effectively only needed one forward and backward pass per sample. We used a small dataset of 3,420 samples for analysis by setting the subsampling factor from 8 to 1, in order to simplify the process. We fine-tuned the training of our CNN to reach high accuracies in distinguishing prey bouts of larval zebrafish from spontaneous swims. Subsequently, we analyzed the learned weights by computing and averaging the relevance heatmaps of all samples grouped by class. We further define prey bouts as positive and spontaneous swims as negative. We performed a small hyperparameter search over learning rate and weight decay, which proved sufficient because all models were initialized with pre-trained weights from ImageNet. 
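The weight-adaptation tricks described above (averaging the RGB filters for the grayscale spatial stream, tiling them with a little noise for the 170-channel temporal stream, and averaging groups of the 1,000 ImageNet output units down to two classes) can be sketched in PyTorch as follows; the function names and the noise scale are illustrative assumptions.

```python
import torch

def adapt_input_conv(rgb_weight: torch.Tensor, in_channels: int, noise_std: float = 1e-3) -> torch.Tensor:
    """Adapt a pretrained conv1 weight of shape (out, 3, k, k) to a new input channel count:
    average the RGB filters for a 1-channel input, or tile them (plus uniform noise so the
    copies can diverge during training) for a many-channel optical-flow input."""
    if in_channels == 1:
        return rgb_weight.mean(dim=1, keepdim=True)
    repeats = -(-in_channels // 3)                              # ceil(in_channels / 3)
    tiled = rgb_weight.repeat(1, repeats, 1, 1)[:, :in_channels]
    return tiled + noise_std * torch.rand_like(tiled)

def adapt_output_layer(fc_weight: torch.Tensor, fc_bias: torch.Tensor, num_classes: int = 2):
    """Average disjoint groups of the ImageNet output units (e.g. 500 per group) to
    initialize the two-class output layer."""
    group = fc_weight.size(0) // num_classes
    w = torch.stack([fc_weight[i * group:(i + 1) * group].mean(0) for i in range(num_classes)])
    b = torch.stack([fc_bias[i * group:(i + 1) * group].mean(0) for i in range(num_classes)])
    return w, b
```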
For our baseline SVM -detailed in Appendix A -we report the 5-fold cross-validated accuracy and the final accuracy on the held-out test set. The hyperparameters 4 agree with the ones found by. We further present the accuracies of the CNN used for heatmap analysis, as well as the best CNNs before and after removal of experimental artifacts on training, validation, and test sets in Table 1. For the test set we additionally report individual accuracies of the spatial and temporal streams. We highlight that the final CNN attains a test accuracy of 96.32%, which is 6.12% points better than the baseline. Relevance heatmaps. We used relevance heatmaps to visualize the regions the CNN pays most attention to when classifying a specific sample. We computed relevance averages across samples and frames, as well as split by class in Figure 3 for more comprehensive insights into the features learned by our CNN. Similar from other explainability techniques can be found in Appendix C. As expected, the heatmaps exhibit the checkerboard artifacts typical of kernels with stride 2 in the first convolutional layer . Steadiness of the fish's trunk as differentiating feature. First and foremost, the heatmaps show that our CNN is able to differentiate the movements of zebrafish based on their characteristic motion. Relevance is highly concentrated at the trunk, i.e. the rostral part of the tail, for both temporal and spatial stream. We observe a very sharp relevance pattern along the edges of the tail. This indicates that the pre-trained weights helped the network to look for edges. The CNN pays little to no attention to the end of the tail even though it is visible in most frames. Instead, it makes positive classifications by looking at the very start of the trunk. The heatmaps suggest that a calm and steady trunk indicates a prey bout. As for the negatives, the spread out relevance pattern reflects the high frequency of tail deflections typical of spontaneous bouts, which had been identified by before. This makes clear that the network is able to differentiate the movements of zebrafish to high accuracy based on their characteristic motion. "Clever Hans" predictions. CNNs are incredibly powerful at finding any kinds of correlations in the input data even if they are not related to the object of interest. have termed such spurious correlations "Clever Hans" predictions, because the model bases its prediction not on what we want it to focus on, but some unintended artifacts in the data. Figure 3 shows clearly that our CNN bases a significant number of its negative responses mainly on motion in the top left corner and focuses little on tail appearance and motion. This plays a role only in negative classifications and only in the temporal stream. While the heatmaps are vertically symmetric, as we would expect due to vertical flipping during augmentation, this is not true for the peculiar region in the top left corner. Figure 3 depicts the averaged heatmap after removing the artifacts in the top and bottom left hand corners and retraining our CNN. Relevance is now entirely focused on the tail. Relevance distribution across frames. Most relevance is concentrated on the frames in the range 7-46, as depicted in Figure 4. The first seven frames are of least importance. This is very likely because our pre-processing procedure added a buffer of 15 frames before each bout, suggesting that the network focuses on the range of frames which are in fact the most relevant ones. 
This further supports the hypothesis that our CNN is able to differentiate zebrafish movements based on their characteristic motion patterns. We trained a two-stream Convolutional Neural Network (CNN) on recordings of larval zebrafish to classify prey and spontaneous swim bouts. We then visualized the learned weights by generating relevance heatmaps showing which regions of the input the network focuses on while performing its classifications. We find that our CNN is capable of learning highly discriminating tail features. These features seem to be quite different from the ones used in the SVM classification by -the previous state-of-the-art in bout classification. The heatmaps further uncovered a "Clever Hans" type of correlation. After removing this spurious correlation and retraining the network, the network reached a test accuracy of 96.32%, which is 6.12% points better than the accuracy achieved by. Judging from the test accuracy, our CNN has learned better discriminating features than those used for the SVM by , and has thus beaten manual feature engineering in this application domain. Steadiness of the fish's trunk as differentiating feature. The relevance heatmaps and high accuracy show that the network achieves correct classifications by looking for salient features in the trunk of the tail while largely disregarding the tip. A sharp and clear relevance profile confined to the edges of the trunk gives a clear sign of a prey bout. The opposite speaks for a spontaneous bout. Here, attention spreads out to capture the strong vertical oscillation of the trunk. For this reason we conclude that the CNN makes its predictions based on the steadiness of the trunk. We believe our interpretation of learned features to be in line with existing research on the kinematics of prey bouts. As shown by and , prey bouts require fine control of the tail's axial kinematics to perform precise swim movements. Zebrafish noticeably reduce their yaw rotation and stabilize the positioning of their head to make a targeted move at their prey. Such precise movements are not required in spontaneous swim bouts. The heatmaps indicate that the network has found clear evidence for these kinds of motion in the trunk of the tail. Furthermore, we argue that the CNN has learned features which are very different from the ones identified by. All of their features -as outlined in Section 2 -, except the second one, rely on information from the tip of the tail and a complete sequence of frames. However, many optical flow frames do not depict the tip of the tail because of its small size and high speed. This might have happened due to suboptimal parameter settings which could not handle the sometimes long distances which the tip traveled between frames. Also, subsamples include only 85 of the original 150 frames for each video. Due to its higher performance, we conclude not only that the CNN has learned a different set of features, but also that these features must bear higher discriminative power. Origin of the "Clever Hans" correlation. The telltale motion in the top left corner stems from a substance called agarose, which the fish's head was embedded in to keep it steady. It is quite curious that, while not visible to human eyes, the agarose seems to be moving each time the fish performed a spontaneous swim bout, but not so for a prey bout. We speculate that this correlation was unintentionally introduced by the experimenters who might have tapped the petri dish to induce the fish to perform a spontaneous swim bout. 
Future work. Calculating and storing optical flow is expensive. If we attained similar performance on original frames, training would be considerably cheaper. While we can confirm the findings by that the spatial stream by itself reaches a fairly competitive accuracy, it provides only very minor improvement to the overall network. Yet, this stream is probably looking for very similar features as the temporal stream, because it focuses largely on the upper half of the tail, just like the temporal stream. If that is the case, we should see improved performance when giving the spatial stream a sequence of frames. It should be interesting to probe whether the spatial stream could then match or even surpass the performance of the temporal stream. Furthermore, CNNs such as the one used in this paper could be used to investigate brain recovery in larval zebrafish. It has been shown on a cellular level that zebrafish can heal their brain within days after a lesion. However, this needs to be proven on a behavioral level . Future work could perform a lesion study on the optic tectum in zebrafish , a brain region responsible for translating visual input into motor output. CNNs could then assess swim bouts of recovered fish and give a measure for potential behavioral changes. Insights from relevance heatmaps would be required if the CNN were not able not distinguish recovered fish from healthy ones. For each frame in the 1,214 videos, we applied the tail-fitting code developed by to compute points along the tail, as depicted in Figure S5. We initialized their procedure central and 8 pixels from the left edge, because after pre-processing we could assume this to be just next to the right end of the bladder. Some of the videos contained frames which the tail-fitting code had problems processing, possibly because the pre-processing procedure cut off the tip of the tail in these instances. This ed in 953 correctly processed videos, including 482 (50.6%) spontaneous and 471 (49.4%) prey bouts. We performed no augmentation here because this would not have benefited the SVM. The feature extraction and model fitting algorithm then split the set into 85% training and 15% held-out test sets. have identified 5 key features which allowed their SVM to achieve a cross-validation accuracy of 96%. They did not report on a held-out test set. We used their provided code to extract these features. Then we performed a grid-search to tune SVM-kernel, γ, and C 5. Just like we used stratified 5-fold cross-validation . Figure S6 gives an overview over the whole project pipeline. The figure includes a depiction of the raw data input, pre-processing, augmentation, CNN and SVM training, and heatmap analysis. Figure S7 summarizes the data augmentation procedure. All scripts worked with a seed of 462019. We used the openly available distributions of NumPy 1. 16.4 (van der), 5 RBF-kernel, γ ∈ {1e-1, 1e-2, 1e-3, 1e-4}, C ∈ {0.01, 0.1, 1, 10}; linear-kernel, C ∈ {0.01, 0.1, 1, 10}. Matplotlib 3.1.1 , tqdm 4.32.2 (da), OpenCV 4.1.0.25 , scikit-learn 0.21.2 , PyTorch 1.1.0 , h5py 2.9.0 , TensorFlow 1.14.0 , Keras 2.2.4 , and iNNvestigate 1.0.8 . Pre-processing. We aimed to center the fish's bladder on the left of each cropped frame. To achieve this, we applied a binary threshold at value 3, because after normalization and gamma correction the pixels of the bladder were separated mostly below this value. While the fish was quite light, the eyes as the second darkest part of the fish might still have fallen under this threshold. 
Therefore, we first detected all contours to discard tiny contours (< 0.01% of the whole frame) and then kept only the right-most contour. Since this had to be the bladder now, we could get the crop dimensions using the right-most pixel of that contour. Each raw video mainly consisted of a still fish interspersed with a few short bouts. These were the events we extracted into consecutive 150 frames each. The idea was to detect motion by checking the percentage of pixel value changes from one frame to the next, considering only the tail. We omitted pixels other than the tail with a simple binary threshold at value 200. The pixels had to change in our case at least 0.38% of the entire pixel range (height × width × 255) in order for motion to be detected. If the algorithm detected motion for a certain number of consecutive frames, it set this as the start of an event. Also, it added a preceding buffer of 15 frames. The end was set 150 frames from the start. If no motion was detected, we automatically took the first frame as a start. We had to take care that extracted videos did not overlap, i.e. in part contained identical frames, even if there were two or more distinct movements contained within 150 frames. This might have otherwise lead to train/test-contamination. Therefore, we discarded any detected motions which fell in the range of a previous video. One special case we did not take into account was when the start was less than 150 frames from the end of the file. We shifted the start back by just enough frames to fit it in, but this might have made it overlap with a previous video. Since this case was probably very rare and not detrimental, we have kept the code as it was. Data augmentation. We parallelized data augmentation with 38 workers, because optical flow calculation took roughly 14 seconds per subsample. We used a Dell PowerEdge R815 with four 16 core Opteron CPUs, 256 GB memory, and 4 TB of disk space (HDD). Furthermore, the ing hdf5-files were compressed with gzip-compression at maximum compression level. This outperformed lzf-copmression by a factor of 1.76 when comparing training times. This advantage can be ascribed to heavy parallelization of decompression and relatively low transfer speeds between hard drive and memory. Training procedure. We implemented PyTorch's Dataset module to make use of the multiprocessing capabilities of the DataLoader class on servers with 16-32 CPU cores and 4 GPUs (NVIDIA GTX1060 6 GB) each. This was necessary to achieve manageable epoch times. Relevance analysis with heatmaps. The toolbox "iNNvestigate" ) for analyzing the trained weights only supported Keras with the TensorFlow-backend. Therefore, we re-implemented the exact structure of our CNN and initialized it with the extracted weights from PyTorch. While the conversion could have been done with tools like ONNX, after a few unsuccessful attempts we transported the weights with a custom Python script. A caveat to the "iNNvestigate" toolbox emerged after heatmap generation: it had problems analyzing 1,578 of the 3,420 samples. These produced an empty output. We made sure the problematic samples did not follow any systematic pattern by checking the indices of correctly analyzed samples, the ratio of true positives and negatives and false positives and negatives, as well as the distribution of classification confidence after the analysis. Since all numbers were the same as before the analysis, we continued with 1,842 samples for further investigation. 
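For orientation, the heatmap generation itself only requires a few calls once the Keras copy of a stream is in place; the sketch below follows the analyzer names of iNNvestigate 1.x ("deep_taylor", "gradient", "guided_backprop") and should be read as an approximation of the analysis script rather than a copy of it.

```python
import innvestigate
import innvestigate.utils as iutils

def relevance_heatmaps(keras_model, inputs, method="deep_taylor"):
    """One relevance map per sample. `keras_model` is the re-implemented Keras stream
    holding the weights exported from PyTorch; `inputs` is an (N, H, W, C) array."""
    model_wo_softmax = iutils.model_wo_softmax(keras_model)     # analyzers expect pre-softmax outputs
    analyzer = innvestigate.create_analyzer(method, model_wo_softmax)
    relevance = analyzer.analyze(inputs)                        # same shape as `inputs`
    return relevance.mean(axis=0)                               # average over samples for class-level maps
```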
We generated saliency maps ) and Guided BackProp heatmaps analogous to the relevance heatmaps in Figures S8 and S9 for comparison with the more recent technique of DTD. It becomes apparent that these other two techniques allow similar insights, although slightly fuzzier. Importantly, they also uncover the "Clever Hans" prediction. Here, we depict the ten most informative consecutive flow frames of the single most confident true positive (Figure S10), true negative (Figure S11), false positive (Figure S12), and false negative (Figure S13) sample. Figure S15 summarizes the spatial heatmaps of the same samples. Moreover, we gather five particularly informative flow frames in Figure S14 and four spatial heatmaps in Figure S15. We performed a confidence analysis on the heatmaps which depicted the "Clever Hans" feature of agarose motion in the top left corner. We sorted all negative classifications by increasing confidence calculated as − log(log(P) /log(P)) for each sample. We then grouped and averaged the heatmaps over windows of 104 samples, as shown in Figure S16. The analysis uncovered that the more confident a negative classification, the more the CNN relied on tail features. This in turn indicated that the CNN was able to learn actual features on the tail and did not entirely rely on agarose motion. Also, it suggested that tail features were a better predictor than agarose motion.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJgQkT4twH
We demonstrate the utility of a recent AI explainability technique by visualizing the learned features of a CNN trained on binary classification of zebrafish movements.
When communicating, humans rely on internally-consistent language representations. That is, as speakers, we expect listeners to behave the same way we do when we listen. This work proposes several methods for encouraging such internal consistency in dialog agents in an emergent communication setting. We consider two hypotheses about the effect of internal-consistency constraints: 1) that they improve agents’ ability to refer to unseen referents, and 2) that they improve agents’ ability to generalize across communicative roles (e.g. performing as a speaker de- spite only being trained as a listener). While we do not find evidence in favor of the former, our show significant support for the latter. Emergent communication is the study of how linguistic protocols evolve when agents are tasked to cooperate. For example, agents engaged in a simple object retrieval task learn to communicate with one another in order to get the items they want. To date, work of this type has each agent assume a conversational role. Thus, agents are often trained only to speak or only to listen, or similarily trained to speak using a vocabulary disjoint from the vocabulary it is understands as a listener-e.g. speaking only to ask questions ("what color?") and listening only to comprehend the answer ("blue") ). These assumptions are misaligned with how we think about human communication, and with the way we'd like computational models to work in practice. As humans, not only can we easily shift between roles, we also know that there is inherent symmetry between these roles: we expect others to speak (or listen) similarly to the way we do, and we know that others expect the same of us. We test if dialog agents that incorporate the symmetry between themselves and their communicative partners learn more generalizable representations than those which do not. We introduce three modifications to the agents to encourage that they abide by the "golden rule": speak/listen as you would want to be spoken/listened to. Specifically, these modifications include self-play training objectives, shared embedding spaces, and symmetric decoding and encoding mechanisms that share parameters. We test two hypotheses about the effect of the proposed modifications on emergent communication: 1. Internal-consistency constraints improve agents' ability to generalize to unseen items-e.g. training on "red square" and "blue circle" and then testing on "blue square". 2. Internal-consistency constraints improve agents' ability to generalize across communicative roles-e.g. training on "blue" as a listener, and using "blue" as a speaker when testing. We evaluate the effect of each of the proposed modifications with two reference game datasets and two model architectures, an RNN model used by and a Transformer model. We find no evidence to support that internal-consistency improves generalization to unseen items (Hypothesis 1), but significant evidence that these proposed constraints enable models to generalize learned representations across communicative roles (Hypothesis 2), even in the case of where the agent receives no direct training in the target (test) role. All of our code and data are available at bit.ly/internal-consistency-emergent-communication. Notation. The space of possible references is parameterized by the number of attributes n f that describe each item (e.g. color) and the number of values n v each attribute can take (e.g.{red, blue}). Each item o is a bag-of-features vector o P t0, 1u N where N " n f¨nv. 
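To make the notation concrete, the following NumPy sketch builds such bag-of-features item vectors and assembles a reference-game context (it also anticipates the round construction described next); the helper names and the toy attribute sizes are illustrative only.

```python
import numpy as np

def encode_item(values, n_f, n_v):
    """Item -> o in {0,1}^N with N = n_f * n_v: index (attr * n_v + val) is 1 iff the
    item expresses value `val` of attribute `attr`."""
    o = np.zeros(n_f * n_v, dtype=np.int64)
    for attr, val in enumerate(values):
        o[attr * n_v + val] = 1
    return o

def sample_round(items, k, rng):
    """One round <C, r, r_idx>: a referent plus k-1 uniformly sampled distractors,
    stacked as the columns of the context matrix C."""
    idx = rng.choice(len(items), size=k, replace=False)
    C = np.stack([items[i] for i in idx], axis=1)
    r_idx = int(rng.integers(k))
    return C, C[:, r_idx], r_idx

rng = np.random.default_rng(0)
items = [encode_item([rng.integers(3) for _ in range(4)], n_f=4, n_v=3) for _ in range(20)]
C, referent, r_idx = sample_round(items, k=5, rng=rng)   # C has shape (n_f * n_v, k)
```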
Each index o i is 1 if o expresses the ith feature value. The speaker produces a message with symbols from a vocabulary V with length L. For comparison, we use the best-performing setting |V| " 100 and L " 10 from previous work. Symbols in V are represented as 1-hot vectors. In each round of the reference game, we construct xC, r, ry where C is the context (set of item column vectors stacked into a matrix), r is a vector representing the referent, and r is the index of the referent in C. We uniformly sample k´1 items as distractors to form C " to 1,... o k´1 uYtru. The distractors are is sampled randomly each round (in every epoch). We begin with a general architecture and training objective to underly all of our models (Sections 3.1 and 3.2). We then introduce three modifications which can be used to encourage internallyconsistent representations: a self-play training objective, a shared embedding space, and a symmetric decoding and encoding mechanism with shared parameters (Section 3.3) 1. Agents contain four modules. Embedding modules 1) E item P R NˆD, E message P R |V|ˆD. E˚pxq embed items and messages. When speaking, the decoder module 2) consumes the embedded referent E item prq and produces a discrete message M P V L. Next, when listening, the encoder module 3) consumes embedded messages E message pM q P R WˆD and then produces a representation of the referentr P R D. Finally, a non-parametric pointing module 4) produces a distribution P pCq over the context by matrix multiplyingr with the embedded context E item pCq. The decoders emit one symbol at a time, auto-regressively producing fixed-length messages. The messages are discretized with the straight-through Gumbel Softmax as in. This converts a distribution to a one-hot vector while permitting gradients to flow through the operation and enables the agents to be optimized without reinforcement methods. The Recurrent model uses a LSTM decoder when speaking and a LSTM encoder when listening, as in. The Transformer model uses a Transformer Decoder when speaking and a Transformer Encoder to encode when listening. See Appendix A for implementation details and hyperparameters. Speaking S: r, θ Ñ M and listening L: C, M, θ Ñ P pCq are both functions where Lˆ|V| is a discrete-valued message with length L, P pCq is a distribution over the items in context, and θ are optimizable parameters. We optimize the parameters θ A, θ B of agents A, B over a dataset D to select each referent r from among the distractors in its context C by minimizing the negative log likelihood of selecting the correct referent in context (Eq 1). In our experiments, the speaker modules and the listener modules instantiate the function S and L respectively. We investigate three internal-consistency constraints, that encourage internally-consistent representations. Baseline agents consist of two separate sets of parameters, one for listening and one for speaking. For example, the baseline recurrent model consists of two recurrent models. This corresponds to the scenario where agents either speak or listen, but not both. We introduce a 1) self-play loss for both agents of the same form as Eq. 1, except the given agent fulfills both roles, encouraging it to speak/listen to others the same way it speaks/listens to itself. When we use the self-play training objective, we use it for both agents. Next, 2) shared embedding agents use the same item embeddings and the same message embeddings when listening and speaking. 
Finally, 3) symmetric encoding and decoding agents use the same parameters (but different mechanisms) to decode a message when speaking as it does to encode a message when listening. Parameters are only ever shared between roles within an agent, and never between agents. Our evaluation is based on the simple reference game 2 as described in Section 2, played across two datasets. The datasets, summarized in Table 1, target different aspects of lexical reference. The first, Visual Attributes for Concepts (CONCEPTS) , is derived from annotated images of actual items (animals, tools, etc). Thus, the data contains realistic co-occurance patterns: groups of attributes like has-head, has-mouth, and has-snout appears together several times, whereas has-seeds, has-mouth, made-of-metal never co-occur. The intuition is that a good lexicon will exploit the structure by developing words which refer to groups of frequently co-occurring attributes (e.g. "mammal") and will describe unseen referents in terms of these primitive concepts. The second dataset, SHAPES, is one we create ourselves in contrast with the CONCEPTS data. In SHAPES, items correspond to abstract shapes which are exactly describable by their attributes (e.g. blue, shaded, hexagon). All the attributes are independent and there is no co-occurence structure to exploit, so a good lexicon should ideally provide a means for uniquely specifying each attribute's value independently of the others. We present experimental aimed at testing the hypotheses stated in Section 1. To provide intuition, we frame experiments in terms of two agents, "Alice" (Agent A) and "Bob" (Agent B), who are taking turns asking for and fetching toys. We first test whether any of the proposed internal-consistency constraints improve the agents' ability to generalize to novel items-i.e. items which consist of unseen combinations of features. Here, we focus on the performance when models are trained and tested within the same communicative role. This corresponds to the setting that has typically been used in prior work on emergent communication: Alice always speaks in order to ask for toys, Bob always responds by fetching them, and the pair's success is evaluated in terms of Alice's ability to describe new toys such that Bob correctly gets them. For evaluation, we hold out a subset of the value combinations from each dataset to use for testing. For example, the agents might be trained to refer to trshape:circle, color:reds, rshape:square, color:bluesu and then tested on its ability to refer to rshape:circle, color:blues. We compute validation and test accuracies. Table 2: Results when agents are trained and tested in a single role, before any internal-consistency constraints. These scores are mean accuracy with 95% confidence range averaged over 5 runs over different test set samplings (the distractors change). one role, they can still impose internally consistent behavior across both roles. It is conceivable that doing so might improve performance even though each agent remains in a fixed role, either by providing the model with additional information about the task structure, or simply by acting as a regularizer. Thus, for completeness, we assess whether the internal-consistency constraints provide any advantage to the models in the vanilla emergent communication setting. Table 3 shows the effect of adding the self-play objective in the fixed-role setting, across architectures and datasets. 
The trends are mixed: it appears the additional signal only noises the baseline and symmetric models, whereas the shared embeddings models are able to leverage it effectively. Thus, the effect is not clear enough to establish conclusively that the internal-consistency constraints help the agents generalize in this fixed-role setting, and in fact it may hurt. Table 3: Performance on task of referring to/fetching unseen items for baseline model compared against models with the internal-consistency constraints. To highlight the difference of each constraint compared to the baseline performance, each delta compares the performance of the modified model to the baseline model. In this setting, we see no clear advantage to enforcing internalconsistency via self-play. These scores are mean accuracy with 95% confidence interval averaged over 5 runs over different test set samplings (the distractors change). We now look at whether internal-consistency improves the agents' ability to generalize linguistic knowledge across roles. For example, we can picture the following scenario: Alice is speaking to Bob, and asks for the "truck". Bob hands her the doll, and Alice replies negatively, indicating that what she actually wanted was the truck. Now, without additional direct supervision, when Bob wants the truck, will he know do use the word "truck"? Such a setting is particularly relevant in practical settings, for example when robotic agents must reach high accuracy despite only limited access to human interaction for training. We consider two versions of this setting, involving different levels of direct supervision (i.e. interaction with the other agent) as described below. Training in one role. Our first experimental setting assumes that Alice and Bob each only receive direct training in one role, e.g. Alice only ever speaks to Bob, so Alice only receives feedback on how she is performing as a speaker, and Bob on how he is performing as a listener. However, both Alice and Bob are able to practice in the opposite role via self-play. This setup is analogous to the experiment just discussed in Section 5.1.3. However, unlike before, Alice and Bob will be tested in the roles opposite of those in which they were trained. That is, if Alice was trained as a speaker, then she will be tested as a listener (on her ability to correctly identify items to which Bob refers). Training in both roles. In our second experimental setting, we assume Alice and Bob enjoy a healthy friendship, in which both take turns speaking and listening to each other, and thus both receive direct supervision in both roles. However, they do not necessarily receive equal training on every vocabulary item. Rather, there are some contexts in which Alice only speaks and other contexts in which she only listens. Intuitively, this corresponds to a scenario in which Alice speaks exclusively about food (while Bob listens), while Bob speaks exclusively about toys (while Alice listens). We are interested in testing how well Alice is able to speak about toys at test time. We use the SHAPES dataset 4 to create two training splits, each having the same attributes but covering disjoint sets of values. For example, the first training split (train-1) might have colorP{blue, red, yellow} whereas the second training split (train-2) has colorP{green, orange, purple}. We use train-1 to train Alice as speaker and Bob as listener and train-2 to train them in the reverse roles. 
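A small sketch of how such disjoint train-1 / train-2 value splits might be enumerated for SHAPES is shown below; the attribute names and value partitions are illustrative, not the actual ones used in the experiments.

```python
import itertools

# Illustrative value partitions (not the paper's actual splits).
train1_values = {"color": ["blue", "red", "yellow"], "shape": ["circle", "square", "triangle"]}
train2_values = {"color": ["green", "orange", "purple"], "shape": ["hexagon", "star", "cross"]}

def enumerate_items(value_sets):
    """All items expressible with the given per-attribute value sets."""
    attrs = sorted(value_sets)
    return [dict(zip(attrs, combo)) for combo in itertools.product(*(value_sets[a] for a in attrs))]

train1 = enumerate_items(train1_values)   # Alice speaks / Bob listens on these items
train2 = enumerate_items(train2_values)   # roles reversed on these items
```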
We then report performance with Alice as listener and Bob as speaker using a test set that uses the same attribute values as train-1. Our results for both training conditions are shown in Table 4. The baseline model (which includes no internal-consistency constraints) performs, unsurprisingly, at chance. Adding the self-play objective gives improvements across the board. Again, while seemingly straightforward, this result has promising practical interpretations in settings in which a model has access to only a small amount of interaction. For example, a human may be willing to train a robot via speaking (pointing and naming items), but not patient enough to train it via listening (responding to the robot's noisy commands). In such a setting, the ability to massively augment performance via self-play is significant. In addition to the self-play objective, we see that enforcing shared-embedding spaces yields further significant performance gains (in the range of 30 percentage points in some cases). The symmetric constraints on top of self-play and shared embeddings seem to hurt performance in general. Columns: Baseline, +Self-play, +Shared Emb., +Symmetric. Table 4: Performance for tasks that require agents to generalize across roles, e.g. training on the word "blue" as a listener, but then having to produce "blue" as a speaker. "One Role" refers to when agents receive direct feedback in a single role (i.e. their training on the other roles is only via self-play). "Both Roles" refers to when agents receive direct feedback in both roles, but only see the test vocabulary in the role opposite that in which they are tested. To inspect the additive differences between the internal-consistency constraints, each delta compares the performance of the current column to the previous column. These scores are mean accuracy with 95% confidence range averaged over 5 runs with different test set samplings (the distractors change). Overall, when agents can be trained directly in the role in which they are tested, there is no clear evidence that adding internal-consistency constraints improves the ability of agents to generalize to new items. However, internal-consistency constraints improve performance significantly when agents have limited ability to train in a given role. Specifically, models which are equipped with self-play training objectives and shared embedding spaces show superior ability to generalize learned representations across roles, and perform about as well as if they had been trained on the target role. In this section we provide additional analyses to highlight the effects of internal-consistency (in particular, self-play) on training efficiency and on the emerged protocol. Here, we use a smaller SHAPES dataset (see B.1), and reduce the vocabulary size and message length (|V| = 10, L = 3). Here we inspect whether self-play supplants direct supervision between agents. We consider the setting in which Alice trains with full data as a speaker, but vary the amount of data she has access to as a listener. We then test Alice's performance as a listener (and vice-versa for Bob as a speaker). Fig. 2 shows the results, with the fraction of the full training data that Alice (Bob) sees as a speaker (listener) shown along the x-axis. The self-play models without direct supervision perform well: it appears their protocol transfers across roles without drifting. This sheds some light on the performance drop between the "one role" and "two role" settings in Section 5.2.2, where the additional experience in the "two role" setting did not help. Fig.
2 shows that additional training in the primary role is unnecessary, so it is not surprising that training on disjoint features (train-2) is helpful. We measure whether self-play leads to better communication protocols in general. First, we measure the agents' speaking and listening capacities separately, using measures proposed by;. Specifically, positive signaling (S`) measures if the speaker's messages depend on the features of the referent, and positive listening (L`) measures if the listener's actions depend on the message 6. Table 5 shows that self-play improves the agents' communication as measured by accuracy as well as these orthogonal metrics. We also find that the model architectures and self-play impact the agents' lexicons. The recurrent models produces fewer unique messages than the transformer models (on average 110 versus 300), and often neglect to use all the vocabulary. Fig. 3 shows that self-play helps the recurrent model use more of the vocabulary, and leads to both the recurrent and transformer models to develop sparser mappings from symbols onto features. Work in emergent communication (; analyzes agents that develop a shared protocol by playing reference games . presented showing that computational models do not learn compositional protocols by default. Instead, the agents tend to develop brittle protocols that have trouble generalizing to novel items. Several approaches have been proposed which could encourage models to learn more generalizable representations of language, including compression , efficiency , memory constraints, pragmatic constraints , and positive biases to respond to other agents . Some work, like ours, assumes access to symbolic representations of referents and their attributes, whereas others' are set in pixel-based multi-agent games (; ;) or multi-agent grid-worlds . Our work also relates to a broader body of work on speaker-listener models, specifically pragmatically-informed models in which speakers reason recursively about listeners' beliefs (and vice-versa) . Such models have been used in applications such image captioning (; ;), and robotics (; ;), as well as in linguistics and psychology in order to explain complex linguistic inferences . Conceptually, our proposed internal-consistency constraints share something in common with these neural speaker-listener models developed outside of emergent communication. However, again, past work has tended to assume that a speaker's mental model of their listener is not necessarily consistent-in fact, it is often assumed explicitly to be inconsistent -with the way the speaker themself would behave as a listener. We note, however, that our proposed model architecture (because it lacks the recursion typical in other pragmatics models) is likely unable to handle the types of higher-level inferences (e.g. implicatures) targeted by the mentioned prior work on computational pragmatics, though this is an interesting avenue to explore. We propose three methods for encouraging dialog agents to follow "the golden rule": speak/listen to others as you would expect to be spoken/listened to. In the emergent communication setting, we find that the internal-consistency constraints do not systematically improve models' generalization to novel items, but both the self-play objective and shared embeddings significantly improve performance when agents are tested on roles they were not directly trained for. 
In fact, when trained in one role and tested on another, these internal-consistency constraints allow the agents to perform about as well as if they had been trained in the target role. We use the deep learning framework PyTorch (v1.2.0) to implement our models (and Python 3.7.3). For reproducibility, all random seeds (random, numpy, torch) are arbitrarily set to 42. The general architecture and the four modules that comprise each agent are shown in Figure 4. The recurrent model decodes and encodes messages as follows: to generate a message, the first input is the embedding of an SOS (start-of-sentence) symbol and the initial hidden state is set as the embedded referent (and the cell memory is all zeroes). From here, at each step, the outputted hidden state (∈ R^D) is projected by the transposed word embeddings (E_message^T ∈ R^(D×|V|)), and the next word is sampled from the resulting distribution across the vocabulary. Moving forward, the next input is the embedding of the sampled word, and the hidden state and cell memory are set to those emitted at the previous step. We produce words until the maximum length is reached. When encoding a message, a learned embedding is set as the first hidden state, and the input at each time step is the corresponding embedded word. The last hidden state is set as the encoding. In the symmetric variant of this model, the LSTM cell used for encoding and decoding is the same. This architecture underlies each model we use; only the implementations of the Decoder and Encoder modules vary between models. In the baseline models, no parameters are shared within an agent. In shared embedding models, the embeddings (purple) are shared across roles. In symmetric models, the encoder and decoder (pink) are shared across both roles. The blue modules are non-parametric. The transformer model decodes and encodes messages as follows: to generate a message describing the referent auto-regressively, all the embeddings of the words produced so far M and the embedding of a NEXT symbol are concatenated together into a matrix X (Eq. 2). Next, the transformer decoder consumes this matrix and the referent embedding and produces a contextualized representation X̃ of the input matrix (Eq. 3). The last column vector X̃_(:,W), which corresponds to the NEXT embedding, is the internal representation of the next word. The next word m is sampled from the projection of this representation with the transposed word embedding. More words are produced in this way until the maximum-length message is formed. Producing the (i+1)-th word (so that M consists of the first i words) proceeds in the same way. To encode incoming messages when listening, all the embeddings of the words in the message plus the embedding of an ITEM symbol are concatenated together into a matrix X (Eq. 5). Then, the transformer encoder is used to produce a contextualized embedding X̃ (Eq. 6). The last column vector X̃_(:,W), which corresponds to the ITEM embedding, is set as the message encoding. Note that W is the number of words in each message (and the length of M): X̃ = TransformerEncoder(X) ∈ R^((W+1)×D), r = X̃_(:,W). A.3 SYMMETRIC AGENTS For the Symmetric Recurrent Model, an LSTM cell is shared between the encoder and decoder. Otherwise, the recurrent model is unchanged. For the Symmetric Transformer Model, Transformer Encoders and Transformer Decoders have different structures, so to share parameters between them, we have to change either how the transformer agent speaks or how it listens.
We opt to replace the Transformer Decoder with a Transformer Encoder, and use it to decode messages in-place when speaking. The Symmetric Transformer uses the same mechanism for encoding messages when listening as the default transformer model (described directly above). However, it uses a Transformer Encoder when speaking instead of a Transformer Decoder. When speaking, to produce the next symbol, the embeddings of the i words produced so far, the referent embedding, and the embedding of a NEXT symbol are concatenated together into a matrix X (Eq. 8). The Transformer Encoder then maps X to a contextualized representation X̃ (Eq. 9). Finally, the column vector in X̃ that corresponds to NEXT is used to sample the next symbol: X = [E_item(r); E_message(M); E_ITEM]. A.4 HYPER PARAMETERS We uniformly sampled 25 hyperparameter configurations for each model architecture, experiment, and dataset split. In every case, we fixed the hidden size dimensionality and embedding dimensionality to be the same. We searched over three different learning rate schedulers: None (i.e. no scheduler); ReduceLROnPlateau with a patience of 25, a reduction factor of 0.1, and validation accuracy as its measure of progress; and CyclicLR, rising from 0.00001 to the given learning rate over 500 batches and then declining towards 0.00001 for the rest of training (10,000 batches). This is similar to the Noam update. To save space, we relegate the hyper-parameter selections to our code at bit.ly/internal-consistency-emergent-communication; see /lib/hyperparamters.py. TRE takes two important hyperparameters, an error function and a composition function. We select the same choices as the author, for what amounts to the same task (producing a discrete message) as detailed in the original paper. This method requires structured feature representations, so we assume that the features in each item are entirely right-branching. The composition function is learned, and we set the number of update steps to 1000. The original implementation is at https://github.com/jacobandreas/tre, and our modification is at bit.ly/internal-consistency-emergent-communication in the file ./lib/compositionality.py. We simplify the SHAPES dataset in order to be able to empirically compute the positive listening and signaling scores, which requires iterating over all possible messages. In the original version of SHAPES this is impractical, as there would be 50^10 possible messages. The details of this smaller version of SHAPES are given in Table 9. We fix the settings for the recurrent and transformer models, as we found that a majority of models across experiments used the same parameters. See Tables 10, 11. Furthermore, all results in Sec. 6 are averaged over 5 arbitrary random seeds (both trained and tested). Positive listening was introduced as Causal Influence of Communication, and the precursor to positive signaling was introduced in earlier work. We use definitions of both metrics modified for a one-step game: Positive Listening: L+ := D_KL(Pr(a|m) || Pr_l(a)); Positive Speaking: S+ := I(m; x) = H(m) - H(m|x), where m is a given message, a ranges over the actions the agent can take, x is the state, D_KL is the Kullback-Leibler divergence, I is the mutual information, and H is the entropy. We can compute these quantities without sampling messages because the number of messages is tractable. In our setting, the game is a single step, and the average policy over actions independent of the message converges to the uniform distribution over actions, as the order of referents is randomized.
Thus we have: L+ = D_KL(Pr(a|m) || U). The positive speaking metric is computed over a sampling of the dataset (the distractors are random) as follows: S+ = H(m) - H(m|x) = -Σ_m π(m) log π(m) + E_x[Σ_m π(m|x) log π(m|x)], where the summations are over all possible messages, x is the given referent, π(m) is the empirical likelihood of the message being produced irrespective of x, and π(m|x) is the likelihood of the given message being produced given the referent x. We report empirical averages of L+ and S+ over all items in the dataset, also averaged over 5 arbitrary random seeds. Figs. 5, 6, 7, 8 show lexicons across random seeds. See Table 8 for additional results in the vanilla setting for SHAPES-small that were elided for space.
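For concreteness, when the message space is small enough to enumerate (as in SHAPES-small), the two scores defined above can be computed along the following lines; the array layout and function names are assumptions of this sketch, not the authors' implementation.

import numpy as np

def positive_speaking(p_m_given_x):
    # p_m_given_x: array of shape (num_items, num_messages); row i is pi(m | x_i).
    # S+ = H(m) - H(m|x), with the marginal pi(m) taken as the empirical average over items.
    eps = 1e-12
    p_m = p_m_given_x.mean(axis=0)
    h_m = -np.sum(p_m * np.log(p_m + eps))
    h_m_given_x = -np.mean(np.sum(p_m_given_x * np.log(p_m_given_x + eps), axis=1))
    return h_m - h_m_given_x

def positive_listening(p_a_given_m):
    # p_a_given_m: array of shape (num_messages, num_actions); row j is Pr(a | m_j).
    # L+ = KL(Pr(a|m) || U), averaged here over messages; U is uniform because referent order is randomized.
    eps = 1e-12
    num_actions = p_a_given_m.shape[1]
    log_uniform = -np.log(num_actions)
    kl = np.sum(p_a_given_m * (np.log(p_a_given_m + eps) - log_uniform), axis=1)
    return kl.mean()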
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SkgJOAEtvr
Internal-consistency constraints improve agents' ability to develop emergent protocols that generalize across communicative roles.
Neural networks (NNs) are able to perform tasks that rely on compositional structure even though they lack obvious mechanisms for representing this structure. To analyze the internal representations that enable such success, we propose ROLE, a technique that detects whether these representations implicitly encode symbolic structure. ROLE learns to approximate the representations of a target encoder E by learning a symbolic constituent structure and an embedding of that structure into E’s representational vector space. The constituents of the approximating symbol structure are defined by structural positions — roles — that can be filled by symbols. We show that when E is constructed to explicitly embed a particular type of structure (e.g., string or tree), ROLE successfully extracts the ground-truth roles defining that structure. We then analyze a seq2seq network trained to perform a more complex compositional task (SCAN), where there is no ground truth role scheme available. For this model, ROLE successfully discovers an interpretable symbolic structure that the model implicitly uses to perform the SCAN task, providing a comprehensive account of the link between the representations and the behavior of a notoriously hard-to-interpret type of model. We verify the causal importance of the discovered symbolic structure by showing that, when we systematically manipulate hidden embeddings based on this symbolic structure, the model’s output is also changed in the way predicted by our analysis. Finally, we use ROLE to explore whether popular sentence embedding models are capturing compositional structure and find evidence that they are not; we conclude by discussing how insights from ROLE can be used to impart new inductive biases that will improve the compositional abilities of such models. Certain AI tasks consist in computing a function ϕ that is governed by strict rules: e.g., if ϕ is the function mapping a mathematical expression to its value (e.g., mapping '19−2 * 7' to 5), then ϕ obeys the rule that ϕ(x + y) = sum(ϕ(x), ϕ(y)) for any expressions x and y. This rule is compositional: the output of a structure (here, x + y) is a function of the outputs of the structure's constituents (here, x and y). The rule can be stated with full generality once the input is assigned a symbolic structure giving its decomposition into constituents. For a fully-compositional task, completely determined by compositional rules, an AI system that can assign appropriate symbolic structures to inputs and apply appropriate compositional rules to these structures will display full systematic generalization: it will correctly process arbitrary novel combinations of familiar constituents. This is a core capability of symbolic AI systems. Other tasks, including most natural language tasks such as machine translation, are only partially characterizable by compositional rules because natural language is only partially compositional in nature. For example, if ϕ is the function that assigns meanings to English adjectives, it generally obeys the rule that ϕ(in-+ x) = not ϕ(x), (e.g., ϕ(inoffensive) = not ϕ(offensive)), yet there are exceptions: ϕ(inflammable) = ϕ(flammable). On these "partially-compositional" AI tasks, this strategy of compositional analysis has demonstrated considerable, but limited, generalization capabilities. 
Deep learning research has shown that Neural Networks (NNs) can display remarkable degrees of combinatorial generalization, often surpassing symbolic AI systems for partially-compositional tasks , and exhibit good generalization (although generally falling short of symbolic AI systems) on fully-compositional tasks (; a). Given that standard NNs have no obvious mechanisms for representing symbolic structures, parsing inputs into such structures, nor applying compositional symbolic rules to them, this success raises the question we address in this paper: How do NNs achieve such strong generalization on partially-compositional tasks, and good performance on fully-compositional tasks? An important step towards answering this question was reported in McCoy et al. (2019a), which showed that when trained on highly compositional tasks, standard NNs learned representations that were well approximated by symbolic structures (Sec. 2). Processing in these NNs assigns such representations to inputs and generates outputs that are governed by compositional rules stated over those representations. We refer to the networks to be analyzed as target NNs, because we will propose a new type of NN (in Sec. 4) -the Role Learner (ROLE) -which is used to analyze the target network, after first discussing related analysis methods in Sec. 3. In contrast with the analysis model of McCoy et al. (2019a), which relies on a hand-specified hypothesis about the underlying structure, ROLE automatically learns a symbolic structure that best approximates the internal representation of the target network. Automating the discovery of structural hypotheses provides two advantages. First, ROLE achieves success at analyzing networks for which it is not clear what the underlying structure is. We show this in Sec. 6, where ROLE successfully uncovers the symbolic structures learned by a seq2seq RNN trained on the SCAN task . Second, removing the need for hand-specified hypotheses allows the data to speak for itself, which simplifies the burden on the user, who only needs to provide input sequences and associated embeddings. We first consider fully-compositional (hence synthetic) tasks: a simple string-manipulation task in Sec. 5, and the richer SCAN task, which has been the basis of previous work on combinatorial generalization in NNs, in Sec. 6. Discovering symbolic structure within a model enables us to perform precise alterations to the internal representations in order to produce desired combinatorial alterations in the output (Sec. 6.3). Then, in Sec. 7, we turn briefly to partially-compositional tasks in NLP. In Sec. 8 we consider how what we have learned about standard NNs can suggest new inductive biases to strengthen compositionality in NN learning. We build on McCoy et al. (2019a), which introduced the analysis task DISCOVER (DISsecting COmpositionality in VEctor Representations): take a NN and, to the extent possible, find an explicitly-compositional approximation to its internal distributed representations. McCoy et al. (2019a) showed that, in GRU encoder-decoder networks performing simple, fullycompositional string manipulations, the medial encoding (between encoder and decoder) could be extremely well approximated, up to a linear transformation, by Tensor Product Representations (TPRs) , which are explicitly-compositional vector embeddings of symbolic structures. 
To represent a string of symbols as a TPR, the symbols in the string 337 might be parsed into three constituents {3 : pos1, 7 : pos3, 3 : pos2}, where posn is the role of n th position from the left edge of the string; other role schemes are also possible, such as roles denoting right-to-left position: {3 : third-to-last, 3 : second-to-last, 7 : last}. The embedding of a constituent 7: pos3 is e(7 : pos3) = e F ⊗ e R (pos3), where e R, e F are respectively a vector embedding of the roles and a vector embedding of the fillers of those roles: the digits. The embedding of the whole string is the sum of the embeddings of its constituents. In general, for a symbol structure S with roles {r k} that are respectively filled by the symbols {f k}, e TPR (S) = k e F (f k) ⊗ e R (r k). This work falls within the larger paradigm of using analysis techniques to interpret NNs (see for a recent survey), often including a focus on compositional structure (; ;). Two of the most popular analysis techniques are the behavioral and probing approaches. In the behavioral approach, a model is evaluated on a set of examples carefully chosen to require competence in particular linguistic phenomena (; ; ; ; ; b). This technique can illuminate behavioral shortcomings but says little about how the internal representations are structured, treating the model as a black box. In the probing approach, an auxiliary classifier is trained to classify the model's internal representations based on some linguistically-relevant distinction (; ; ; ;). In contrast with the behavioral approach, the probing approach tests whether some particular information is present in the model's encodings, but it says little about whether this information is actually used by the model. Indeed, in at least some cases models will fail despite having the necessary information to succeed in their representations, showing that the ability of a classifier to extract that information does not mean that the model is using it . DISCOVER bridges the gap between representation and behavior: It reveals not only what information is encoded in the representation, but also shows how that information is causally implicated in the model's behavior. Moreover, it provides a much more comprehensive window into the representation than the probing approach does; while probing extracts particular types of information from a representation (e.g., "does this representation distinguish between active and passive sentences?"), DISCOVER exhaustively decomposes the model's representational space. In this regard, DISCOVER is most closely related to the approaches of , Chrupała & , and , who also propose methods for discovering a complete symbolic characterization of a set of vector representations, and and , which also seek to extract more interpretable symbolic models that approximate neural network behavior. 1 produces a vector-space embedding of an input string of T symbols S = s 1 s 2... s T by producing a TPR T(S) and then passing it through a linear transformation W. ROLE is trained to approximate a pre-trained target string-encoder E. Given a set of N training strings {S,..., S (N) }, ROLE minimizes the total mean-squared error (MSE) between its output W T(S (i) ) and E's corresponding output, E(S (i) ). Figure 1: The role learning module. The role attention vector a t is encouraged to be one-hot through regularization; if a t were one-hot, the produced role embedding r t would correspond directly to one of the roles defined in the role matrix R. 
The LSTM can be unidirectional or bidirectional. ROLE is an extension of the Tensor-Product Encoder (TPE) introduced in McCoy et al. (2019a) (as the "Tensor Product Decomposition Network"), which produces a linearly-transformed TPR given a string of symbols and pre-assigned role labels for each symbol (see Appendix A.1 for details). Crucially, ROLE is not given role labels for the input symbols, but learns to compute them. More precisely, it learns a dictionary of n R d R -dimensional role-embedding vectors, R ∈ R dR×nR, and, for each input symbol s t, computes a soft-attention vector a t over these role vectors: the role vector assigned to s t is then the attention-weighted linear combination of role vectors, r t = R a t. ROLE simultaneously learns a dictionary of n F d F -dimensional symbol-embedding filler vectors F ∈ R dF×nF, the φ th column of which is f φ, the embedding of symbol type φ; φ ∈ 1,..., n F where n F is the size of the vocabulary of symbol types. The TPR generated by ROLE is thus T(S) = T t=1 f τ (st) ⊗ r t, where τ (s t) is symbol s t's type. Finally, ROLE learns a linear transformation W to map this TPR into R d, where d is the dimension of the representations of the encoder E it is learning to approximate. ROLE uses an LSTM to compute the role-assigning attentionvectors a t from its learned embedding F of the input symbols s t: at each t, the hidden state of the LSTM passes through a linear layer and then a softmax to produce a t (depicted in Figure 1). Since a TPR for a discrete symbol structure deploys a discrete set of roles specifying discrete structural positions, ideally a single role would be selected for each s t: a t would be one-hot. ROLE training therefore deploys regularization to bias learning towards one-hot a t vectors (based on the regularization proposed in , developed for the same purpose). See Appendix A.4 for the precise regularization terms that we used. Although the regularization can yield good , performing continuous gradient descent on a nearly discrete symbolic landscape can cause networks to get stuck in local optima where the learned symbolic structure is not ideal. For this reason, the performance at convergence can vary appreciably across multiple runs. It is important to note that, while we impose this regularization on ROLE, there is no explicit bias favoring discrete compositional representations in the target encoder E: any such structure that ROLE finds hidden in the representations learned by E must be the of biases implicit in the vanilla RNN-architecture of E when applied to its target task. We first apply ROLE to two target Tensor Product Encoder (TPE) models which are fully compositional by design. Since we know what role scheme each target model deploys, we can test how well ROLE learns these ground-truth roles. The TPEs are trained on the fully compositional task of autoencoding sequences of digits. We use two types of TPEs: one that uses a simple left-to-right role scheme (e.g., first, second, third) and one that uses a complex tree position role scheme (e.g., left child of the root of the tree, right child of the left child of the left child of the root of the tree), where the trees are generated from digit sequence inputs using a deterministic parsing algorithm (see Appendix A.2 for explanations and examples of the designed role schemes). The left-to-right TPE was paired with a unidirectional RNN decoder, while the tree-position TPE was paired with the tree-RNN decoder used by McCoy et al. (2019a). 
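Before turning to the results, the ROLE computation described above (soft role attention, filler-role binding, and the final linear map W) can be summarized in a short PyTorch sketch. Module names, dimensions, and the use of a bidirectional LSTM here are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn

class RoleLearnerSketch(nn.Module):
    def __init__(self, vocab_size, d_filler, d_role, n_roles, d_target, lstm_hidden=64):
        super().__init__()
        self.fillers = nn.Embedding(vocab_size, d_filler)        # F: one filler embedding per symbol type
        self.roles = nn.Parameter(torch.randn(d_role, n_roles))  # R: learned role dictionary
        self.attender = nn.LSTM(d_filler, lstm_hidden, bidirectional=True, batch_first=True)
        self.to_role_scores = nn.Linear(2 * lstm_hidden, n_roles)
        self.W = nn.Linear(d_filler * d_role, d_target, bias=False)

    def forward(self, symbol_ids):                               # symbol_ids: (batch, T)
        f = self.fillers(symbol_ids)                             # (batch, T, d_filler)
        h, _ = self.attender(f)                                  # (batch, T, 2 * lstm_hidden)
        a = torch.softmax(self.to_role_scores(h), dim=-1)        # soft role attention a_t: (batch, T, n_roles)
        r = a @ self.roles.t()                                   # r_t = R a_t: (batch, T, d_role)
        tpr = torch.einsum('btf,btr->bfr', f, r)                 # sum_t f_t (outer) r_t: the TPR T(S)
        return self.W(tpr.flatten(1))                            # linear map into the target encoder's space

# Training would minimize the MSE between this output and the target encoder's embedding E(S),
# plus the role regularizer of Appendix A.4.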
Both of these target models attained near-perfect performance on the autoencoding task (Table 1). Once the encoders are finished training, we extract the encoding for each sequence in the dataset and use this to train ROLE. See Appendices A.3 and A.5 for additional training details. Table 1 reports the approximation performance of ROLE in two ways. Substitution Accuracy is the proportion of the items for which the decoder produces the correct output string when it is fed the ROLE approximation. The V-Measure assesses the extent to which the clustering of the role vectors assigned by ROLE matches the ground truth role assignments. The ROLE approximation of the left-to-right TPE attained perfect performance, with a substitution accuracy of 100% and a V-Measure of 1.0, indicating that the role scheme it learned perfectly matched the ground truth. On the significantly more complex case of tree position roles, ROLE achieves essentially the same accuracy as the target encoder E and has considerable success at recovering the ground truth roles for the vectors it was analyzing. (Footnote 2: Let the t-th LSTM hidden state be q_t ∈ R^H; let the output-layer weight matrix have rows k_ρ ∈ R^H, and let the columns of R be v_ρ ∈ R^(d_R), with ρ = 1, ..., n_R. Then r_t = R a_t = Σ_{ρ=1}^{n_R} v_ρ softmax(k_ρ q_t): the result of query-key attention with query q_t over a fixed external memory containing key-value pairs {(k_ρ, v_ρ)}_{ρ=1}^{n_R}.) These results show that, when a target model has a known fully compositional structure, ROLE can successfully find that structure. We have established that ROLE can uncover the compositional structure used by a model that is compositional by design. But, returning to our central question from Sec. 1, how can models without explicit compositional structure (namely, standard RNNs) still be as successful at fully compositional tasks as fully compositional models? Our hypothesis is that, though these models have no constraint forcing them to be compositional, they still have the ability to implicitly learn compositional structure. To test this hypothesis, we apply ROLE to a standard RNN-based seq2seq model trained on a fully compositional task. Because the RNN has no constraint forcing it to use TPRs, we do not know a priori whether there exists any solution that ROLE could learn; thus, if ROLE does learn anything it will be a significant empirical finding about how these RNNs operate. We consider the SCAN task, which was designed to test compositional generalization and systematicity. SCAN is a synthetic sequence-to-sequence mapping task, with an input sequence describing an action plan, e.g., jump opposite left, being mapped to a sequence of primitive actions, e.g., TL TL JUMP (see Sec. 6.3 for a complex example). We use TL to abbreviate TURN_LEFT, sometimes written LTURN; similarly, we use TR for TURN_RIGHT. The SCAN mapping is defined by a complete set of compositional rules (Supplementary Fig. 7). For our target SCAN encoder E, we trained a standard GRU with one hidden layer of dimension 100 for 100,000 steps (batch-size 1) with a dropout of 0.1 on the simple train-test split. (The hidden dimension and dropout rate were determined by a limited hyper-parameter search; see Appendix A.6.) E achieves 98.47% (full-string) accuracy on the test set. Thus E provides what we want: a standard RNN achieving near-perfect accuracy on a non-trivial fully compositional task. After training, we extract the final hidden embedding from the encoder for each example in the training and test sets.
These are the encodings we attempt to approximate as explicitly compositional TPRs. We provide ROLE with 50 roles to use as it wants (additional training information is in Appendix A.7). We evaluate the substitution accuracy that this learned role scheme provides in three ways. The continuous method tests ROLE in the same way as it was trained, with input symbol s t assigned role vector r t = R a t. The continuous method does not use a discrete set of role vectors because the weighted sum that generates a t allows for continuously-valued weights. The remaining two methods test the efficacy of a truly discrete set of role vectors. First, in the snapped method, a t is replaced at evaluation time by the one-hot vector m t singling out role m t = arg max(a t): r t = R m t. This method serves the goal of enforcing the discreteness of roles, but it is expected to decrease performance because it tests ROLE in a different way than it was trained. Our final evaluation method, the discrete method, uses discrete roles without having such a train/test discrepancy; in this method, we use the one-hot vector m t to output roles for every symbol in the dataset and then train a TPE which does not learn roles but rather uses the one-hot vector m t as input during training. In this case, ROLE acts as an automatic data labeler, assigning a role to every input word. For comparison, we also train TPEs using a variety of discrete hand-crafted role schemes: left-to-right (LTR), right-to-left (RTL), bidirectional (Bi), tree position, Wickelrole (Wickel), and bag-of-words (BOW) (additional information provided in Appendix A.2). The substitution accuracy from these different methods is shown in Table 2. All of the predefined role schemes provide poor approximations, none surpassing 44.12% accuracy. The role scheme learned by ROLE does significantly better than any of the predefined role schemes: when tested with the basic, continuous role-attention method, the accuracy is 94.12%. The success of ROLE tells us two things. First, it shows that the target model's compositional behavior relies on compositional internal representations: it was by no means guaranteed to be the case that ROLE would be successful here, so the fact that it is successful tells us that the encoder has learned compositional representations. Second, it adds further validation to the efficacy of ROLE, because it shows that it can be a useful analysis tool in cases of significantly greater complexity than the autoencoding task. Analyzing the roles assigned by ROLE to the sequences in the SCAN training set, we created a symbolic algorithm for predicting which role will be assigned to a given filler. This is described in Appendix A.8.1 and discussed at some length in Appendix A.8.2. Though the algorithm was created based only on sequences in the SCAN training set, it is equally successful at predicting which roles will be assigned to test sequences, exactly matching ROLE's predicted roles for 98.7% of sequences. The details of this algorithm illuminate how the filler-role scheme encodes information relevant to the task. First, one of the initial facts that the decoder must determine is whether the sequence is a single command, a pair of commands connected by and, or a pair of commands connected by after; such a determination is crucial for knowing the basic structure of the output (how many actions to perform and in what order). 
We have found that role 30 is used for, and only for, the filler and, while role 17 is used in and only in sequences containing after (usually with after as the filler bound to role 17). Thus, the decoder can use these roles to tell which basic structure is in play: if role 30 is present, it is an and sequence; if role 17 is present, it is an after sequence; otherwise it is a single command. Once the decoder has established the basic syntactic structure of the output, it must then fill in the particular actions. This can be accomplished using the remaining roles, which mainly encode absolute position within a command. For example, the last word of a command before after (e.g., jump left after walk twice) is always assigned role 8, while the last word of a command after after (e.g., jump left after walk twice) is always assigned role 46. Therefore, once the decoder knows (based on the presence of role 17) that it is dealing with an after sequence, it can check for the fillers bound to roles 8 and 46 to begin to figure out what the two subcommands surrounding after look like. The identity of the last word in a command is informative because that is where a cardinality (i.e., twice or thrice) appears if there is one. Thus, by checking what filler is at the end of a command, the model can learn whether there is a cardinality present and, if so, which one. This description of how the decoding could take place does not necessarily match how it actually does take place; for example, it is likely that some of the steps we have described as occurring serially, for expository simplicity, actually occur in parallel. We leave for future work the question of which operations are actually being performed and how those operations are instantiated in an RNN. To address this causal question , we actively intervene on the constituent structure of the internal representations by replacing one constituent with another syntactically equivalent one 4, and see whether this produces the expected change in the output of the decoder. We take the encoding generated by the RNN encoder E for an input such as jump opposite left, subtract the vector embedding of the opposite constituent, add the embedding of the around constituent, and see whether this causes the output to change from the correct output for jump opposite left (TL TL JUMP) to the correct output for jump around left (TL JUMP TL JUMP TL JUMP TL JUMP). The roles in these constituents are determined by the algorithm of Appendix A.8. If changing a word leads other roles in the sequence to change (according to the algorithm), we update the encoding with those new roles as well. Such surgery can be viewed as based in a more general extension of the analogy approach used by for analysis of word embeddings. An example of applying a sequence of five such constituent surgeries to a sequence are shown in Figure 2 (left). The previous sections explored fully-compositional tasks where there is a strong signal for compositionality. In this section, we explore whether the representations of NNs trained on tasks that are only partially-compositional also capture compositional structure. Partially-compositional tasks are especially challenging to model because a fully-compositional model may enforce compositionality too strictly to handle the non-compositional aspects of the task, while a model without a compositional bias may not learn any sort of compositionality from the weak cues in the training set. 
We test four sentence encoding models for compositionality: InferSent , Skipthought , Stanford Sentiment Model (SST) , and SPINN . For each of these models, we extract the encodings for the SNLI premise sentences . We use the extracted embeddings to train ROLE with 50 roles available (additional training information provided in Appendix A.10). of cognitive representations must individually be causally efficacious in order for those constituents to provide an explanation of the compositionality of cognition . That TPRs meet the challenge of explaining compositionality was argued in Smolensky (1987; 1991). 4 We extract syntactic categories from the SCAN grammar (, Supplementary Fig. 6) by saying that two words belong to the same category if every occurrence of one could be grammatically replaced by the other. Based on our analysis in Appendix A.8, we do not replace occurrences of and and after since the presence of either of these words causes substantial changes in the roles assigned to the sequence. As a baseline, we also train TPEs that use pre-defined role schemes (additional training information in Appendix A.9). For all of the sentence embedding models except Skip-thought, ROLE with continuous attention provides the lowest mean squared error at approximating the encoding (Table 3). The BOW (bag-of-words) role scheme represents a TPE that does not use compositional structure by assigning the same role to every filler; for each of the sentence embedding models tested except for SST, performance is within the same order of magnitude as structure-free BOW. found that a bag-of-words model scores extremely well on Natural Language Inference despite having no knowledge of word order, showing that structure is not necessary to perform well on the sorts of tasks commonly used to train sentence encoders. Although not definitive, these provide no evidence that these sentence embedding models rely on compositional representations. TPRs provide NNs an aspect of the systematicity of symbolic computation by disentangling fillers and roles. The learner needs to learn to process fillers -providing, essentially, what each input constituent contributes to the output -and to process roles -essentially, how these contributions are used in the output. In this work, we used ROLE to interpret the workings of a target encoder E, and in future work, we plan to train ROLE in an end-to-end manner, either using it as the encoder itself, or using it to regularize a standard (e.g., RNN) encoder with a loss term that rewards learning compositional encodings that ROLE can approximate well. We will test whether such an explicit bias for compositionality allows networks to train faster, or with fewer parameters, and to achieve more systematic generalization. Recent work showed improvements in compositionality by separating out syntax and semantics with attention , and our suggest that ROLE can also disentangle syntax and semantics. The structured representations that TPRs provide allow us to structure the processing that occurs over these representations. In fully-compositional tasks, the processing of fillers and roles can be encoded (and hence in principle learned) independently. In partially-compositional tasks, the processing of fillers and roles may be approximately encoded independently, with key interactions when a task deviates from full compositionality. In this work, we showed how a hidden embedding can be factored into fillers and roles. 
Similarly, it is possible to factor a weight matrix into weights that process roles and weights that process fillers. An illustration of this with a simple example from the SCAN task is provided in Appendix A.11. We plan to explore whether providing networks with a bias favoring weight matrices that factor in this way can improve systematic generalization. We have introduced ROLE, a neural network that learns to approximate the representations of an existing target neural network E using an explicit symbolic structure. ROLE successfully discovers symbolic structure both in models that explicitly define this structure and in an RNN without explicit structure trained on the fully-compositional SCAN task. When applied to sentence embedding models trained on partially-compositional tasks, ROLE performs better than hand-specified role schemes but still provides little evidence that the sentence encodings represent compositional structure. Uncovering the latent symbolic structure of NN representations on fully-compositional tasks is a significant step towards explaining how they can achieve the level of compositional generalization that they do, and suggests types of inductive bias to improve such generalization for partially-compositional tasks. Figure 3: The Tensor Product Encoder architecture. The yellow circle is an embedding layer for the fillers, and the blue circle is an embedding layer for the roles. These two vector embeddings are combined by an outer product to produce the green matrix representing the TPR of the constituent. All of the constituents are summed together to produce the TPR of the sequence, and then a linear transformation is applied to resize the TPR to the target encoders dimensionality. ROLE replaces the role embedding layer and directly produces the blue role vector. We use six hand-specified role schemes as a baseline to compare the learned role schemes against. Examples of each role scheme are shown in Table 4. We trained two TPEs end-to-end with an RNN decoder for our target networks on the digit sequence tasks. The left-to-right (LTR) TPE used a left-to-right role scheme applied to each element in the input and was connected to a unidirectional GRU decoder. The tree TPE used tree positions for each element in the input and was connected to a tree GRU decoder; here, the digit strings were parsed as a binary tree by a deterministic algorithm given in McCoy et al. (2019a, App. C). The filler and role dimensions were both 20 for the LTR TPE. The filler dimension was 20 for the Tree TPE, and the role dimension was 120. We used a hidden size of 60 for the GRU decoders. We used a patience of 2 for early stopping. The left-to-right TPE achieves 100% accuracy on the test set and the tree TPE achieves 98.62% on the test set. Letting A = {a t} T t=1, the regularization term applied during ROLE training is R = λ(R 1 +R 2 +R 3), where λ is a regularization hyperparameter and: Since each a t from a softmax, its elements are positive and sum to 1. Thus the factors in R 1 (A) are all non-negative, so R 1 assumes its minimal value of 0 when each a t has binary elements; since these elements must sum to 1, such an a t must be one-hot. R 2 (A) is also minimized when each a t is one-hot because when a vector's L 1 norm is 1, its L 2 norm is maximized when it is one-hot. Although each of these terms individually favor one-hot vectors, empirically we find that using both terms helps the training process. 
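A minimal sketch of a regularizer consistent with this description is given below; it also includes the third term R3, which is defined in the next paragraph. The exact functional forms are reconstructions from the prose and should be read as one plausible instantiation rather than the paper's precise equations.

import torch

def role_regularizer(a, lam=1.0):
    # a: tensor of shape (T, n_roles); row t is the softmax attention a_t over roles.
    r1 = (a * (1.0 - a)).sum()                    # zero only when every entry of a_t is 0 or 1 (hence one-hot)
    r2 = -(a * a).sum()                           # minimized when each a_t is one-hot (maximal L2 norm given L1 norm 1)
    s = a.sum(dim=0)                              # total attention each role receives across the string
    r3 = (s.pow(2) * (s - 1.0).pow(2)).sum()      # pushes each role's total attention toward 0 or 1
    return lam * (r1 + r2 + r3)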
In a discrete symbolic structure, each position can hold at most one symbol, and the final term R 3 in ROLE's regularizer R is designed to encourage this. In the vector s A = T t=1 a t, the ρ th element is the total attention weight, over all symbols in the string, assigned to the ρ th role: in the discrete case, this must be 0 (if no symbol is assigned this role) or 1 (if a single symbol is assigned this role). Thus R 3 is minimized when all elements of s are 0 or 1 (R 3 is similar to R 1, but with squared terms since we are no longer assured each element is at most 1). It is important to normalize each role embedding in the role matrix R so that small attention weights have correspondingly small impacts on the weighted-sum role embedding. Once the TPEs in Sec. A.3 were trained, we extracted the hidden embedding for each item in the training, dev, and test sets. For both ROLE models trained on the digit sequence task, we used a bidirectional 2-layer LSTM with filler dimension of 20, and regularization constant λ = 1. For training, we used the ADAM optimizer with a learning rate of.001, batch size 32, and an early stopping patience of 10. The ROLE model trained on the LTR TPE was given 20 roles each of dimension 20. The ROLE model trained on the Tree TPE was given 120 roles each of dimension 120. To train the standard RNN on SCAN, we ran a limited hyperparameter search similar to the procedure in. Since our goal was to produce a single embedding that captured the entire input sequence, we fixed the architecture to GRU with a single hidden layer. We did not train models with attention, since we wanted to investigate whether a standard RNN could capture compositionality. The remaining hyperparameters were hidden dimension and dropout. We ran a search over the hidden dimension sizes of 50, 100, 200, and 400 as well as dropout with a value of 0,.1, and.5 applied to the word embeddings and recurrent layer. Each network was trained with the ADAM optimizer and a learning rate of.001 for 100,000 steps with a batch-size of 1. The best performing network had a hidden dimension or 100 and dropout of.1. For the ROLE models trained to approximate the GRU encoder trained on SCAN, we used a filler dimension of 100, a role dimension of 50 with 50 roles available. For training, we used the ADAM optimizer with a learning rate of.001, batch size 32, and an early stopping patience of 10. The role assignment module used a bidirectional 2-layer LSTM . We performed a hyperparameter search over the regularization coefficient λ using the values in the set [.1, .02, .01]. The best performing value was.02, and we used this model in our analysis. The algorithm below characterizes our post-hoc interpretation of which roles the Role Learner will assign to elements of the input to the SCAN model. This algorithm was created by hand based on an analysis of the Role Learner's outputs for the elements of the SCAN training set. The algorithm works equally well on examples in the training set and the test set; on both datasets, it exactly matches the roles chosen by the Role Learner for 98.7% of sequences (20,642 out of 20,910). The input sequences have three basic types that are relevant to determining the role assignment: sequences that contain and (e.g., jump around left and walk thrice), sequences that contain after (e.g., jump around left after walk thrice), and sequences without and or after (e.g., turn opposite right thrice). 
Within commands containing and or after, it is convenient to break the command down into the command before the connecting word and the command after it; for example, in the command jump around left after walk thrice, these two components would be jump around left and walk thrice. • Sequence with and: -Elements of the command before and: * Last word: 28 * First word (if not also last word): 46 * opposite if the command ends with thrice: 22 * Direction word between opposite and thrice: 2 * opposite if the command does not end with thrice: 2 * Direction word after opposite but not before thrice: 4 * around: 22 * Direction word after around: 2 * Direction word between an action word and twice or thrice: 2 -Elements of the command before and: * First word: 11 * Last word (if not also the first word): 36 * Second-to-last word (if not also the first word): 3 * Second of four words: 24 -and: 30 • Sequence with after: -Elements of the command before after: * Last word: 8 * Second-to-last word: 36 * First word (if not the last or second-to-last word): 11 * Second word (if not the last or second-to-last word): 3 -Elements of the command after after: * Last word: 46 * Second-to-last word: 4 * First word if the command ends with around right: 4 * First word if the command ends with thrice and contains a rotation: 10 * First word if the command does not end with around right and does not contain both thrice and a rotation: 17 * Second word if the command ends with thrice: 17 * Second word if the command does not end with thrice: 10 -after: 17 if no other word has role 17 or if the command after after ends with around left; 43 otherwise • Sequence without and or after: -Action word directly before a cardinality: 4 -Action word before, but not directly before, a cardinality: 34 -thrice directly after an action word: 2 -twice directly after an action word: 2 -opposite in a sequence ending with twice: 8 -opposite in a sequence ending with thrice: 34 -around in a sequence ending with a cardinality: 22 -Direction word directly before a cardinality: 2 -Action word in a sequence without a cardinality: 46 -opposite in a sequence without a cardinality: 2 -Direction after opposite in a sequence without a cardinality: 26 -around in a sequence without a cardinality: 3 -Direction after around in a sequence without a cardinality: 22 -Direction directly after an action in a sequence without a cardinality: 22 To show how this works with an example, consider the input jump around left after walk thrice. The command before after is jump around left. left, as the last word, is given role 8. around, as the second-to-last word, gets role 36. jump, as a first word that is not also the last or second-to-last word gets role 11. The command after after is walk thrice. thrice, as the last word, gets role 46. walk, as the second-to-last word, gets role 4. Finally, after gets role 17 because no other elements have been assigned role 17 yet. These predicted outputs match those given by the Role Learner. We offer several observations about this algorithm. 1. This algorithm may seem convoluted, but a few observations can illuminate how the roles assigned by such an algorithm support success on the SCAN task. First, a sequence will contain role 30 if and only if it contains and, and it will contain role 17 if and only if it contains after. 
Thus, by implicitly checking for the presence of these two roles (regardless of the fillers bound to them), the decoder can tell whether the output involves one or two basic commands, where the presence of and or after leads to two basic commands and the absence of both leads to one basic command. Moreover, if there are two basic commands, whether it is role 17 or role 30 that is present can tell the decoder whether the input order of these commands also corresponds to their output order (when it is and in play, i.e., role 30), or if the input order is reversed (when it is after in play, i.e., role 17). With these basic structural facts established, the decoder can begin to decode the specific commands. For example, if the input is a sequence with after, it can begin with the command after after, which it can decode by checking which fillers are bound to the relevant roles for that type of command. It may seem odd that so many of the roles are based on position (e.g., "first word" and "second-to-last word"), rather than more functionally-relevant categories such as "direction word." However, this approach may actually be more efficient: Each command consists of a single mandatory element (namely, an action word such as walk or jump) followed by several optional modifiers (namely, rotation words, direction words, and cardinalities). Because most of the word categories are optional, it might be inefficient to check for the presence of, e.g., a cardinality, since many sequences will not have one. By contrast, every sequence will have a last word, and checking the identity of the last word provides much functionally-relevant information: if that word is not a cardinality, then the decoder knows that there is no cardinality present in the command (because if there were, it would be the last word); and if it is a cardinality, then that is important to know, because the presence of twice or thrice can dramatically affect the shape of the output sequence. In this light, it is unsurprising that the SCAN encoder has implicitly learned several different roles that essentially mean the last element of a particular subcommand. 2. The algorithm does not constitute a simple, transparent role scheme. But its job is to describe the representations that the original network produces, and we have no a priori expectation about how complex that process may be. The role-assignment algorithm implicitly learned by ROLE is interpretable locally (each line is readily expressible in simple English), but not intuitively transparent globally. We see this as a positive , in two respects. First, it shows why ROLE is crucial: no human-generated role scheme would provide a good approximation to this algorithm. Such an algorithm can only be identified because ROLE is able to use gradient descent to find role schemes far more complex than any we would hypothesize intuitively. This enables us to analyze networks far more complex than we could analyze previously, being necessarily limited to hand-designed role schemes based on human intuitions about how to perform the task. Second, when future work illuminates the computation in the original SCAN GRU seq2seq decoder, the baroqueness of the role-assignment algorithm that ROLE has shown to be implicit in the seq2seq encoder can potentially explain certain limitations in the original model, which is known to suffer from severe failures of systematic generalization outside the training distribution . 
It is reasonable to hypothesize that systematic generalization requires that the encoder learn an implicit role scheme that is relatively simple and highly compositional. Future proposals for improving the systematic generalization of models on SCAN can be examined using ROLE to test the hypothesis that greater systematicity requires greater compositional simplicity in the role scheme implicitly learned by the encoder. 3. While the role-assignment algorithm of A.8.1 may not be simple, from a certain perspective, it is quite surprising that it is not far more complex. Although ROLE is provided 50 roles to learn to deploy as it likes, it only chooses to use 16 of them (only 16 are ever selected as the arg max(a t); see Sec. 6.1). Furthermore, the SCAN grammar generates 20,910 input sequences, containing a total of 151,688 words (an average of 7.25 words per input). This means that, if one were to generate a series of conditional statements to determine which role is assigned to each word in every context, this could in theory require up to 151,688 conditionals (e.g., "if the filler is 'jump' in the context 'walk thrice after opposite left', then assign role 17"). However, our algorithm involves just 47 conditionals. This reduction helps explain how the model performs so well on the test set: If it used many more of the 151,688 possible conditional rules, it would completely overfit the training examples in a way that would be unlikely to generalize. The 47-conditional algorithm we found is more likely to generalize by abstracting over many details of the context. 4. Were it not for ROLE's ability to characterize the representations generated by the original encoder in terms of implicit roles, providing an equally complete and accurate interpretation of those representations would necessarily require identifying the conditions determining the activation level of each of the 100 neurons hosting those representations. It seems to us grossly overly optimistic to estimate that each neuron's activation level in the representation of a given input could be characterized by a property of the input statable in, say, two lines of roughly 20 words/symbols; yet even then, the algorithm would require 200 lines, whereas the algorithm in A.8.1 requires 47 lines of that scale. Thus, by even such a crude estimate of the degree of complexity expected for an algorithm describing the representations in terms of neuron activities, the algorithm we find, stated over roles, is 4 times simpler. For each sentence embedding model, we trained three randomly initialized TPEs for each role scheme and selected the best performing one as measured by the lowest MSE. For each TPE, we used the original filler embedding from the sentence embedding model. This filler dimensionality is 25 for SST, 300 for SPINN and InferSent, and 620 for Skipthought. We applied a linear transformation to the pre-trained filler embedding where the input size is the dimensionality of the pre-trained embedding and the output size is also the dimensionality of the pre-trained embedding. This linearly transformed embedding is used as the filler vector in the filler-role binding in the TPE. For each TPE, we use a role dimension of 50. Training was done with a batch size of 32 using the ADAM optimizer with a learning rate of.001. To generate tree roles from the English sentences, we used the constituency parser released in version 3.9.1 of Stanford CoreNLP . 
For each sentence embedding model, we trained three randomly initialized ROLE models and selected the best performing one as measured by the lowest MSE. We used the original filler embedding from the sentence embedding model (25 for SST, 300 for SPINN and InferSent, and 620 for Skipthought). We applied a linear transformation to the pre-trained filler embedding where the input size is the dimensionality of the pre-trained embedding and the output size is also the dimensionality of the pre-trained embedding. This linearly transformed embedding is used as the filler vector in the filler-role binding in the TPE. We also applied a similar linear transformation to the pre-trained filler embedding before input to the role learner LSTM. For each ROLE model, we provide up to 50 roles with a role dimension of 50. Training was done with a batch size of 32 using the ADAM optimizer with a learning rate of 0.001. We performed a hyperparameter search over the regularization coefficient λ using the values in the set {1, 0.1, 0.01, 0.001, 0.0001}. For SST, SPINN, InferSent and Skipthought, respectively, the best performing network used λ = 0.001, 0.01, 0.001, 0.1. At the end of Sec. 8 we remarked: "the structured representation bias that TPRs provide allows us to also provide biases to structure the processing that occurs over these representations. In fully-compositional tasks, the processing of fillers and roles can be encoded (and hence in principle learned) independently.... In this work, we showed how a hidden embedding can be factored into fillers and roles. Similarly, it is possible to factor a weight matrix into weights that process roles and weights that process fillers." Here we exemplify this with SCAN. One of the compositional rules for the mapping ϕ defined by the SCAN task is: ϕ(x twice) = ϕ(x) ϕ(x); for example, ϕ(walk twice) = ϕ(walk) ϕ(walk) = WALK WALK. For the purposes of illustration, suppose that, given the input string x twice, a NN encoder produces a representation that is approximated by the TPR of a single-constituent structure in which the filler x is assigned the role "argument of twice": x : r_2ce-arg. So the TPR encoding of jump twice is e(jump twice) = e(jump : r_2ce-arg) = e_F(jump) ⊗ e_R(r_2ce-arg), where e_F, e_R are the embedding functions for fillers and roles, respectively. Let us also suppose that the output string is encoded as a TPR with positional roles R_i, so that WALK LOOK has filler:role bindings {WALK : R_1, LOOK : R_2} and so has TPR e({WALK : R_1, LOOK : R_2}) = e_F(WALK) ⊗ e_R(R_1) + e_F(LOOK) ⊗ e_R(R_2). So to generalize correctly to the input jump twice, producing output JUMP JUMP, the system needs to learn two things: how to map the filler, ϕ_F(jump) = JUMP, and how to map the role, ϕ_R(r_2ce-arg) = {R_1, R_2}. The desired mapping is ϕ: e(jump : r_2ce-arg) → e(JUMP JUMP), that is, ϕ: e_F(jump) ⊗ e_R(r_2ce-arg) → e_F(JUMP) ⊗ e_R(R_1) + e_F(JUMP) ⊗ e_R(R_2). We now show that this can be accomplished through the separate filler- and role-mappings ϕ_F: e_F(jump) → e_F(JUMP) and ϕ_R: e_R(r_2ce-arg) → e_R(R_1) + e_R(R_2). To do this, we show that if ϕ_F, ϕ_R are respectively computed over embedding vectors through weight matrices W_F, W_R in a NN, then the weight tensor W = W_F ⊗ W_R will correctly map the embedding of jump twice to the embedding of JUMP JUMP. Indeed, W e(jump : r_2ce-arg) = (W_F ⊗ W_R)[e_F(jump) ⊗ e_R(r_2ce-arg)] = [W_F e_F(jump)] ⊗ [W_R e_R(r_2ce-arg)] = e_F(JUMP) ⊗ [e_R(R_1) + e_R(R_2)] = e_F(JUMP) ⊗ e_R(R_1) + e_F(JUMP) ⊗ e_R(R_2) = e(JUMP JUMP), as desired.
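The algebra above relies only on the identity (A ⊗ B)(x ⊗ y) = (Ax) ⊗ (By). The following small numerical check illustrates it with arbitrary random embeddings; the dimensions, embeddings, and the rank-one construction of W_F and W_R are our own illustrative choices, not parameters from the trained models.

```python
import numpy as np

# Numerical check of the factorized mapping: if W_F maps e_F(jump) to e_F(JUMP)
# and W_R maps e_R(r_2ce-arg) to e_R(R_1) + e_R(R_2), then W = W_F (x) W_R maps
# the TPR of "jump twice" to the TPR of "JUMP JUMP". Embeddings are made up.
rng = np.random.default_rng(0)
d_f, d_r = 5, 4                         # filler / role embedding sizes (arbitrary)

e_jump = rng.normal(size=d_f)           # e_F(jump)
e_JUMP = rng.normal(size=d_f)           # e_F(JUMP)
e_twice_arg = rng.normal(size=d_r)      # e_R(r_2ce-arg)
e_R1, e_R2 = rng.normal(size=d_r), rng.normal(size=d_r)

# Rank-one matrices realizing the two mappings on these particular vectors.
W_F = np.outer(e_JUMP, e_jump) / e_jump.dot(e_jump)
W_R = np.outer(e_R1 + e_R2, e_twice_arg) / e_twice_arg.dot(e_twice_arg)

W = np.kron(W_F, W_R)                   # W = W_F (x) W_R

tpr_in = np.kron(e_jump, e_twice_arg)                     # e(jump : r_2ce-arg)
tpr_out = np.kron(e_JUMP, e_R1) + np.kron(e_JUMP, e_R2)   # e(JUMP JUMP)

print(np.allclose(W @ tpr_in, tpr_out))  # True
```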
This suggests that greater compositional generalization might be achieved by future models that are explicitly biased to utilize TPRs as internal representations and explicitly biased to factor their processing weights into those that process fillers (independently of their roles) and those that process roles (independently of their fillers), as in the SCAN example with W = W_F ⊗ W_R above.
We introduce a new analysis technique that discovers interpretable compositional structure in notoriously hard-to-interpret recurrent neural networks.
The vertebrate visual system is hierarchically organized to process visual information in successive stages. Neural representations vary drastically across the first stages of visual processing: at the output of the retina, ganglion cell receptive fields (RFs) exhibit a clear antagonistic center-surround structure, whereas in the primary visual cortex (V1), typical RFs are sharply tuned to a precise orientation. There is currently no unified theory explaining these differences in representations across layers. Here, using a deep convolutional neural network trained on image recognition as a model of the visual system, we show that such differences in representation can emerge as a direct consequence of different neural resource constraints on the retinal and cortical networks, and for the first time we find a single model from which both geometries spontaneously emerge at the appropriate stages of visual processing. The key constraint is a reduced number of neurons at the retinal output, consistent with the anatomy of the optic nerve as a stringent bottleneck. Second, we find that, for simple downstream cortical networks, visual representations at the retinal output emerge as nonlinear and lossy feature detectors, whereas they emerge as linear and faithful encoders of the visual scene for more complex cortical networks. This predicts that the retinas of small vertebrates (e.g. salamander, frog) should perform sophisticated nonlinear computations, extracting features directly relevant to behavior, whereas retinas of large animals such as primates should mostly encode the visual scene linearly and respond to a much broader range of stimuli. These predictions could reconcile the two seemingly incompatible views of the retina as either performing feature extraction or efficient coding of natural scenes, by suggesting that all vertebrates lie on a spectrum between these two objectives, depending on the degree of neural resources allocated to their visual system. Why did natural selection shape our visual representations to be the way they are? Traditionally, the properties of the early visual system have been explained with theories of efficient coding, which are based on the premise that the neural representations are optimal at preserving information about the visual scene, under a set of metabolic constraints such as total firing rate or total number of synapses. These theories can successfully account for the antagonistic center-surround structure of receptive fields (RFs) found in the retina BID0 BID18 BID35 BID21 BID13, as well as for the oriented structure of RFs found in the primary visual cortex V1 BID30 BID3.However, a number of properties of the early visual system remain unexplained. First, it is unclear why RF geometries would be so different in the retina and V1. A study BID36 has proposed that both representations are optimal at preserving visual information under different metabolic constraints: a constraint on total number of synapses for the retina, and one on total firing rate in V1. However, it is unclear why the two systems would be optimized for these two different objectives. Second, there is a great diversity of ganglion cell types at the output the retina BID17, with each cell type tiling the entire visual field and performing a specific computation. Interestingly, some of these types perform a highly nonlinear computation, extracting specific, behaviorally-relevant cues from the visual scene (e.g. 
direction-selective cells, objectmotion-selective cells), whereas other types are better approximated by a quasi-linear model, and respond to a broad range of stimuli (e.g. midget cells in the primate BID32 and quasi-linear pixel-encoders in the mouse BID20). Intriguingly, although quasi-linear and more nonlinear types exist in species of all sizes (e.g. primate parasol cells are nonlinear BID10), the proportion of cells performing a rather linear encoding versus a nonlinear feature detection seems to vary across species. For example, the most common ganglion cell type in the primate retina is fairly well approximated by a quasi-linear pixel-encoder (midget cells, 50% of all cells and >95% in the central retina BID32 BID11), whereas the most common cell type in mouse acts as a specific feature detector, thought to serve as an alarm system for overhead predators (W3 cells, 13% of all ganglion cells BID38). Again, theories of efficient coding have not been able to account for this diversity of computations found across cell types and across species. The limitations of current efficient coding theories might reside in the simplistic assumption that the objective is to simply relay indiscriminately all visual information to the next stages of processing. Indeed, the ultimate goal of the visual system is to extract meaningful features from the visual scene in order to produce an adequate behavioral response, not necessarily to faithfully encode it. A recent line of work has proposed using the information bottleneck framework as a way to move beyond the simplistic objective of information preservation towards more realistic objectives BID5. Another study has shown that by changing the objective from efficiently encoding the present to efficiently encoding the future (predictive coding), one could better account for the spatio-temporal RFs of V1 cells BID34. Although promising, these approaches were limited to the study of a single layer of neurons, and they did not answer the aforementioned questions about cross-layer or cross-species differences. On the other hand, deep convolutional networks have proven to be accurate models of the visual system, whether they are trained directly on reproducing neural activity BID28 BID4, or on a behaviorally relevant task BID37 BID14 BID4 ), but they have not yet been used to study the visual system through the lens of efficient coding theories. In this study, we trained deep convolutional neural networks on image recognition (CIFAR-10, BID23) and varied their architectures to explore the sets of constraints that could have shaped vertebrates' early visual representations through natural selection. We modeled the visual system with a series of two convolutional networks, one corresponding to the retina and one downstream network corresponding to the ventral visual system in the brain. By varying the architecture of these networks, we first found that a reduction in the number of neurons at the retinal output -corresponding to a realistic physical constraint on the number of fibers in the optic nerve -accounted simultaneously for the emergence of center-surround RFs in our model of the retina, and for the emergence of oriented receptive fields in the primary visual relay of the brain. Second, we found that the degree of neural resources allocated to visual cortices in our model drastically reshaped retinal representations. Given a deep visual cortex, the retinal processing emerged as quasi-linear and retained substantial information about the visual scene. 
In contrast, for a shallow cortex, the retinal processing emerged as nonlinear and more information-lossy, but was better at extracting features relevant to the object classification task. These observations make testable predictions on the qualitative differences that should be found in retinal representations across species, and could reconcile the seemingly incompatible theories of retinal processing as either performing efficient encoding or feature detection.

The retinal architecture is strongly conserved across species BID27, and consists of three layers of feed-forward convolutional neurons (photoreceptors, bipolar cells, ganglion cells) and two layers of inhibitory interneurons (horizontal, amacrine cells). However, we chose to model the retina as a convolutional neural network BID24 with only two layers (fig. 1A). Indeed the retinal response of many species to complex stimuli has been modeled successfully with only one or two-layer models BID12 BID26 BID17, with some rare exceptions of models requiring more layers BID28. We refer to this network as the retina-net. In our simulations, we varied the number of neurons in the second layer of the retina-net, which is the output of the retina, corresponding to the physical bottleneck of the optic nerve conveying all the visual information to the brain (fig. 1B). We modeled the ventral visual system (the system associated with object recognition in the brain BID19) as a convolutional neural network taking its inputs from the retina-net (fig. 1A). We varied the neural resources allocated to the ventral visual system network (VVS-net) by changing the number of layers it is composed of (fig. 1B). We trained the neural network composed of the retina-net and VVS-net end-to-end on an object classification task (CIFAR-10, fig. 1A-C). Even though the visual system does much more than just classify objects in natural images, this objective is already much more complex and biologically realistic than the one used in previous studies of efficient coding, namely preserving all information about the visual scene. Moreover, we are encouraged by the fact that previous studies using this objective have found a good agreement between neural activity in artificial and biological visual networks BID37 BID4.

More specifically, we trained a convolutional neural network on a grayscale version of the standard CIFAR-10 dataset for image classification. The retina-net consisted of two convolutional layers with 32 channels and N_BN channels respectively, and with ReLU nonlinearities at each layer. The VVS-net consisted of a varying number D_VVS of convolutional layers with 32 channels followed by two fully connected layers (the first one with 1024 neurons and the second one with 10 neurons mapping to the 10 object categories), with ReLU nonlinearities at each layer and a softmax nonlinearity at the last layer. The full system encompassing the retina-net and VVS-net thus had 32 → N_BN → 32 → 32 → ... channels respectively, where we varied the retinal bottleneck width, N_BN, as well as the number D_VVS of convolutional brain layers (not counting the fully connected layers). In each convolutional layer, we used 9x9 convolutional filters with a stride of 1 at each step. The large filter size was chosen to give the network flexibility in determining the optimal filter arrangement. We trained our network with the RMSProp optimizer for 20 epochs on the training set with batches of size 32. All optimizations were performed using Keras and TensorFlow.
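A minimal Keras sketch of this architecture is given below. Only the ingredients stated above (channel counts, 9x9 filters with stride 1, ReLU, the two dense layers, RMSProp, 20 epochs, batch size 32, grayscale CIFAR-10) come from the text; the "same" padding and the exact grayscale conversion are our assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(n_bn=1, d_vvs=2, input_shape=(32, 32, 1)):
    """Retina-net (2 conv layers, bottleneck of n_bn channels) followed by a
    VVS-net of d_vvs conv layers and two dense layers, as described above."""
    x = inputs = layers.Input(shape=input_shape)
    # Retina-net: 32 channels, then the N_BN-channel bottleneck (optic nerve).
    x = layers.Conv2D(32, 9, padding="same", activation="relu")(x)
    x = layers.Conv2D(n_bn, 9, padding="same", activation="relu")(x)
    # VVS-net: d_vvs conv layers of 32 channels, then the dense layers.
    for _ in range(d_vvs):
        x = layers.Conv2D(32, 9, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(1024, activation="relu")(x)
    outputs = layers.Dense(10, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="rmsprop",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Grayscale CIFAR-10, trained for 20 epochs with batches of 32 (as in the text).
(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.cifar10.load_data()
x_tr = x_tr.mean(axis=-1, keepdims=True) / 255.0
x_te = x_te.mean(axis=-1, keepdims=True) / 255.0
model = build_model(n_bn=1, d_vvs=2)
model.fit(x_tr, y_tr, batch_size=32, epochs=20, validation_data=(x_te, y_te))
```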
[Figure 1: Illustration of the framework we used to model early visual representations. A: We trained convolutional neural networks on an image recognition task (CIFAR-10). The networks were composed of two parts, a retina-net and a ventral-visual-system-net (VVS-net), which receives input from the retina-net. B: We varied the number of layers in the VVS-net (white boxes) and the number of channels at the output of the retina-net (blue box). C: Key result: A bottleneck at the output of the retina yielded center-surround retinal RFs. A shallow VVS-net yielded more nonlinear retinal responses (linearity is schematized by the red arrow), which better disentangled image classes (represented as bent manifolds). D: Test-set accuracy of all model architectures on CIFAR-10, averaged over ten networks with random initial weights for each architecture. Performance increases with VVS-net depth and retinal channel number, indicating that both factors are meaningful constraints on the network in the regime tested.]

For all results presented, we tested statistical significance by training 10 identical networks with different random initializations of weights and biases taken from a Glorot-uniform distribution BID16. After training, we determined the linear approximation of RFs of each convolutional channel of the network in each layer. This was achieved by computing the gradient of the activation of that channel with respect to a blank image. This gradient map gives a first-order approximation of the image pattern that maximally activates the cells in the channel of interest. In the limit of small noise variance, this computation is mathematically equivalent to measuring the cell's spike-triggered average in response to a perturbative white-noise stimulus BID22 BID33, a commonly used method for determining receptive fields in experimental biology BID7. This equivalence allowed us to compare directly the geometries of RFs experimentally measured in biological networks with the ones found in our models.

The test accuracy of our neural network model of the visual system at the recognition task increased both with the number of channels in the retinal bottleneck, and with the number of layers in the VVS-net (fig. 1D), confirming that we were in a regime where the restrictions on neural resources in the VVS-net and at the output of the retina were critical to the ability of the network to perform the task. Here we investigate the effects of a dimensionality bottleneck at the retinal output on early visual representations in our model of the visual system. When reducing the number of neurons at the output of the retina we found that RFs with antagonistic center and surround emerged. For N_BN = 32, our control setting with no bottleneck at the retinal output, we observed mostly oriented receptive fields in the second layer of the network (FIG1). For N_BN = 4, 2, and 1, we observed center-surround receptive fields in the second layer of the network and mostly oriented receptive fields in the third layer, which is the first layer of the ventral visual system in our model (FIG1). We quantified these results in App. A. The RF geometries did not depend qualitatively on the VVS-net depth D_VVS (results shown for D_VVS = 2), except for the shallowest VVS-net tested (D_VVS = 0, no convolutional layer, and thus no dimensionality expansion), for which the shapes of the emergent retinal RFs were variable across trials and difficult to interpret.
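The RF maps discussed here can be obtained with the gradient-at-a-blank-image procedure described above. The sketch below shows one way to implement it with tf.GradientTape, assuming a Keras model like the one sketched earlier; probing the center unit of the channel is our choice, since the text does not specify which unit within the channel is used.

```python
import numpy as np
import tensorflow as tf

def linear_rf(model, layer_name, channel, input_shape=(32, 32, 1)):
    """First-order receptive field of one channel: gradient of the activation
    of the channel's center unit with respect to a blank (all-zero) image."""
    layer = model.get_layer(layer_name)
    probe = tf.keras.Model(model.input, layer.output)
    blank = tf.zeros((1,) + input_shape)
    with tf.GradientTape() as tape:
        tape.watch(blank)                          # blank is not a Variable
        fmap = probe(blank)                        # shape (1, H, W, C)
        h, w = fmap.shape[1] // 2, fmap.shape[2] // 2
        act = fmap[0, h, w, channel]               # center unit of the channel
    grad = tape.gradient(act, blank)
    return np.squeeze(grad.numpy())                # (H, W) receptive-field map
```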
These RF results are in good agreement with the organization of the biological visual system, where retinal RFs are center-surround and most downstream RFs in primary visual cortex (V1) are sharply oriented BID19, suggesting that the dimensionality bottleneck at the output of the retina is sufficient to explain these differences in representations. It is worth noting that for both conditions (bottleneck and no bottleneck), the RFs of downstream layers in the VVS-net after the first layer exhibited complex shapes that were neither clearly oriented, nor circular, and the RFs in the first layer of the retina did not appear to have any well-defined structure (data not shown). We then tested in our model the hypothesis of Hubel and Wiesel concerning how center-surround cells are pooled to give rise to oriented RFs in V1 BID19. We found that orientation-selective neurons in the VVS-net typically draw primarily from center-surround neurons in the retina-net that are aligned with the direction of the edge, with positive or negative weights corresponding to whether the polarity (light-selective / dark-selective) of the two neurons is consistent or inconsistent (FIG1; see App. A for a quantification). These qualitative results are in good agreement with Hubel and Wiesel's hypothesis. Of course, this hypothesis remains to be tested in the real brain, since there is no evidence that the micro-circuitry of the brain matches that of our simulation. In the visual system of mammals, the main relay of visual information taking its input from the retina is the LGN (thalamus), which has center-surround RFs and a similar total number of neurons as the retinal output BID19. We created a network reflecting this architecture by having two low-dimensionality layers in a row instead of just one (FIG1). After training, we found center-surround RFs in the two layers with a bottleneck (retinal output and LGN), and oriented RFs in the next layer, corresponding to the primary visual cortex (V1). These results suggest that center-surround representations remain advantageous as long as the dimensionality of the representation remains low, and hence dimensionality expansion seems to be the crucial factor explaining the qualitative change of RFs found between LGN and V1. It is an interesting question to ask whether neurons in our model of the VVS are more similar to simple or complex cells BID19. To test this, we performed a one-step gradient ascent on the neural activity of VVS neurons with respect to the image, starting from several random initial images (App. B). If the neurons were acting as simple cells (i.e. approximately linear in the stimulus), we would expect all optimized stimuli to converge to the same preferred stimulus. On the other hand, if the cells were complex (i.e. an OR function over several preferred stimuli), we would expect the emergent preferred stimuli to depend on the exact initialization. Interestingly, we found that most neurons in the first layer of the VVS-net behaved as simple cells, whereas most neurons in the second layer of the VVS-net behaved as complex cells. Note that in biology, both simple and complex cells are found in V1. These results expose the fact that anatomical regions of visual cortex involve multiple nonlinearities and hence may map onto more than one layer of our simple model. Indeed, V1 itself is a multilayered cortical column, with LGN inputs coming in to layer 4, and layer 4 projecting to layers 2 and 3 BID19.
Simple cells are predominantly found in layer 4 and complex cells are predominantly found in layers 2 and 3. These observations bolster the interpretation that biological V1 may correspond to multiple layers in our model. Local divisive normalization (i.e. local gain control) is an ubiquitous source of nonlinearity in the visual system BID15 BID18 BID12. We thus tested the robustness of our main to a more realistic model of the visual system with local normalization, by adding it at every layer of the network (App. C). We found that receptive fields still emerged as center-surround in the retina-net, and as oriented in our model of V1. We note that the local normalization slightly degraded the performance of the network on the task for all parameter settings we tried. We then verified that the emergence of center-surround RFs in the retina-net is a consequence of reducing the number of neurons at the retinal output, not of reducing the number of channels, our model's equivalent of biological retinal cell types. In the retina, there exist 20-30 types of ganglion cells BID32, each with a different and stereotyped receptive field, density, polarity (i.e. ON or OFF), and nonlinearities. Cells of each type tile the entire visual field like a convolutional channel in our model, so there is a direct analogy between channels in our model and ganglion cell types in the retina. In order to test whether the emergence of center-surround RFs depends on the number of types that we allow, or just on the number of neurons that we allow at the output of the retina (i.e. dimensionality bottleneck), we employed locally connected layers -equivalent to convolutional layers, but without parameter-tying between artificial neurons within a channel at different spatial locations. In this manner, we can limit the number of neurons at the retinal output without imposing a constraint on the number of cell types. Such a network contains too many parameters to be trained from scratch by gradient descent; to work around this, we trained the model stage-wise by first training our convolutional control network (N BN = 32 with parameter tying) and then we trained a three-layers untied network (with bottleneck dimension N BN = 4 in the second layer) to reproduce the edge-like activations of the second layer of the control network. Even in the untied retina-net, in which each neuron is effectively its own channel, we found that centersurround RFs emerged (FIG1), indicating that center-surround RFs are the network's preferred strategy for passing information through a dimensionality bottleneck even when no constraint on the number of cell types is imposed. We then found that the cells cluster in two distinct populations. To demonstrate this, we measured their activations in response to 10000 natural images, computed the first 20 principal components of this 10000-dimensional space, and ran t-SNE to visualize the clustering of neuron types. We found that two distinct clusters emerged, that corresponded visually to ON and OFF center-surround RFs (FIG1). We thus observe in our model the emergence of one of the most prominent axes of dichotomy of biological ganglion cell types, namely the classification of cells in ON and OFF populations with RFs of opposite polarity. To what extent are retinal representations in our model shaped by the degree of neural resources allocated to downstream processing? 
To investigate this question, we studied the effects of varying the degree of neural resources in the VVS-net on emergent visual representations in the retina-net. As we increased the number of layers in the VVS-net, the retinal computation became more linear (FIG3), as measured by the ability of the raw image to linearly map onto the neural representation at the retinal output (see methods, and App. F for a visualization of the retinal representation as VVS-net depth increases). This observation is consistent with the current state of knowledge of the differences found in retinal representations across vertebrate species with different brain sizes. The linearization of the retinal response with increased brain complexity was true for different values of bottleneck N_BN. However, when we did not use any bottleneck (N_BN = 32), the trend became non-monotonic, with a peak in linearity of the response when the VVS-net had 1 conv layer (data not shown). Another interesting phenomenon to note is that linearity of the retinal response decreased as we increased the number of channels in the bottleneck, at any fixed brain depth (FIG3). The two main sources of nonlinearity in the retina are thought to be the inner retinal rectifications (bipolar and amacrine cells, corresponding to the first rectified layer in our model) and the ganglion cell rectification (corresponding to the second rectified layer in our model). As we decreased VVS-net depth, we observed that the retinal response became more nonlinear. Is this increase in response nonlinearity due to the first or second stage of nonlinearity in our retina-net? To test this, we plotted the real response against the response predicted by a purely linear model for the shallowest and for the deepest VVS-nets tested (FIG3). If the linear prediction were inaccurate because of the first stage of nonlinear processing in the retina-net, we would expect the points on the scatter plot to be scattered around the unit line. If the prediction error were due to the second stage of nonlinearity, we would expect the linear approximation to make incorrect negative predictions for inactive neurons. In practice, we found that the prediction error of the linear model was partly explained by both stages of nonlinearity in the retina-net model, predicting that both inner retinal nonlinear processing and ganglion cell rectifications should be more pronounced in animals with fewer neural resources in their visual cortices.

Why would retinal representations be more linear when the subsequent ventral visual stream has more resources? One hypothesis is that with a restricted number of neurons, the retina must trade off between the two incentives of compressing visual information in order to transmit it to downstream layers and extracting nonlinear features from the scene to start disentangling the manifolds corresponding to different classes of objects BID8. According to this hypothesis, when the VVS is shallow, the priority of the retina should be to work toward extracting relevant features. When the VVS is deep, the priority of the retina should be to transmit as much visual information as possible for downstream processing. We validated this hypothesis in two ways in our model.

[FIG3 caption, panels B-F: B: Responses of an example retina-net output cell to natural images vs. the best linear-fit prediction from the raw image, for the deepest (top) and shallowest (bottom) VVS-nets. Nonlinearity arises from two sources: rectification within the retina-net (the spread of the bulk of the point cloud) and rectification at the retina-net output (inactive neurons incorrectly predicted to have negative activations). C: Quality of image reconstruction from the retinal representation as a function of VVS-net depth; the retinal representation retains more information about the raw image for deep VVS-nets. D: Linear separability of classes of objects at the retinal output, as a function of VVS-net depth; the dashed line indicates separability of classes of images from the raw image pixels; classes are less separable at the retinal output for deeper VVS-nets. E: Performance on CIFAR-10 for a two-layer densely connected network taking its input from the retina-net or from a raw image; class information is more accessible from the retinal representation. F: Class separability at all layers of the network for a deep VVS-net (D_VVS = 4) with and without a bottleneck (N_BN = 1 and N_BN = 32); the retinal representation of the bottleneck network has low separability, but the first layer of the VVS-net has high separability (see text).]

First, we showed that the retinal representation retained more information about the image as VVS-net complexity increased (FIG3). To estimate information retention, we trained a linear decoder (see methods) from the output of the retina to reconstruct the image and we measured the reconstruction error. The reconstruction error provided a lower bound on the information that the retina retained about the stimulus (note that more information for reconstruction might be accessible by a nonlinear decoder). This corroborated our hypothesis that, as the VVS-net becomes more complex, the retinal representation gets better at retaining visual information for further processing by the VVS-net. Second, we found that different classes of objects of CIFAR-10 (e.g. trucks, frogs) were more linearly separable from the retina-net representation when the VVS-net was shallow than when it was deep (FIG3). To measure linear separability of manifolds, we trained a linear SVM decoder to separate all pairs of classes and evaluated the performance of the SVM classifier on held-out images (see methods). Moreover, we showed that a VVS-net consisting of two fully connected layers only (no convolutional layers) equipped and trained end-to-end with a retina with a tight bottleneck N_BN = 1 (dimensionality of retinal output matches dimensionality of the input image) performed better at image recognition than the same VVS-net trained without a retina-net, taking raw images as input (FIG3). Both of these results corroborate our hypothesis that retinas followed by a simple cortex perform meaningful feature extraction, whereas retinas followed by more complex visual cortices prioritize non-lossy encoding, postponing feature extraction to downstream layers that are better equipped to do it. Next, we show that within a single network, each retinal channel is trading off between linearly transmitting visual information to the brain, and extracting relevant features for the object classification task. For 10 instantiations of a network with a retinal bottleneck containing 4 channels, we plotted the linearity of each of these 4 channels against the linear separability of object categories obtained from each of these representations. We found, across all networks, a systematic negative correlation between linearity and linear separability across all 4 channels (App. D).
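The two quantities contrasted here, linearity of a retinal channel and linear separability of object classes, can be computed with a short scikit-learn recipe of the kind detailed in the methods below. The sketch is ours; array shapes, the train/test split, and the regularization grid are assumptions, not values from the paper.

```python
import numpy as np
from itertools import combinations
from scipy.stats import pearsonr
from sklearn.linear_model import RidgeCV
from sklearn.svm import LinearSVC

def linearity_score(images, responses):
    """Pearson r between a ridge-regression linear prediction (from raw pixels)
    and the actual responses of one retinal output channel."""
    X = images.reshape(len(images), -1)
    n_train = int(0.8 * len(X))
    ridge = RidgeCV(alphas=np.logspace(-2, 4, 7), cv=10)
    ridge.fit(X[:n_train], responses[:n_train])
    pred = ridge.predict(X[n_train:])
    return pearsonr(pred, responses[n_train:])[0]

def separability_score(features, labels):
    """Mean held-out accuracy of pairwise linear SVMs over all class pairs
    (assumes the samples are already shuffled)."""
    accs = []
    for a, b in combinations(np.unique(labels), 2):
        keep = np.isin(labels, [a, b])
        X, y = features[keep], labels[keep]
        n = len(X) // 2
        svm = LinearSVC().fit(X[:n], y[:n])
        accs.append(svm.score(X[n:], y[n:]))
    return float(np.mean(accs))
```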
Again, this strongly suggests that extracting features and transmitting visual information are indeed two competing goals shaping representations in our model of the retina. In the case of the deepest VVS-nets tested, the retinal processing was quasi-linear for the tightest bottleneck (var.expl. = 0.9, N_BN = 1, FIG3). One might take this to suggest that the retina-net in such models does little more than copy image information. However, the very first layer of the VVS-net after the retina disentangled classes (as measured by linear separability) almost as well as the second layer of a VVS-net without a retina (FIG3), suggesting that the retinal representation, while only moderately linearly separable itself, is especially transformable into a representation with a high linear separability. This suggests that even when the retina-net is quasi-linear, it can still participate in extracting relevant features for downstream processing by the brain. The increased separability allowed by the retinal pre-processing for this deep VVS-net could be due to the linear processing, the slightly nonlinear part of the retinal processing, or a combination of both linear and nonlinear processing. To distinguish between these hypotheses, we replaced the true retinal processing by its best linear approximation, retrained the VVS-net on the output of this linearized retina, and tested whether separability was as high as with the true retinal processing (App. E). We found that the first layer trained on the output of the linearized retinal representation was indeed much better than the first layer of the control network (trained directly on natural images) at separating classes of objects, suggesting that the linear operation done by the retina does indeed play a crucial role in making the representation especially separable for subsequent layers.

To estimate the linearity of the response of retinal neurons, we fit a linear model to predict the neural response from the image on 8,000 images. In order to prevent overfitting, we regularized the linear weights with an L2 penalty and optimized the weights using ridge regression. The value of the penalty term was chosen by 10-fold cross-validation on the training set. We then measured the Pearson correlation between the linearized responses and original model responses on a testing set of 2,000 images. To estimate the information about the input image retained by the retinal output representation, we fit a linear model to reconstruct the image from the (fixed) outputs of the trained retina-net of interest. All numerical figures given are variance-explained on the held-out test set. To estimate the linear separability of classes of objects from the neural representation, we trained an SVM classifier between all pairs of classes on half of the testing set of CIFAR-10 (1,000 images that were not used to train the network), and we tested the performance of the SVM classifier on 1,000 held-out images from the testing set, as measured by the percentage of images classified correctly. We then averaged the performance of the SVM across all pairs of classes to obtain the linear separability score.

A unified theoretical account for the structural differences between the receptive field shapes of retinal neurons and V1 neurons has until now been beyond the reach of efficient coding theories. BID21 found that efficient encoding of images with added noise and a cost on firing rate produce center-surround RFs, whereas the same task without noise produces edge detectors.
However, this observation (as they note) does not explain the discrepancy between retinal and cortical representations. BID36 propose a different set of constraints for the retina and V1, in which the retina optimizes for a metabolic constraint on total number of synapses, whereas V1 optimizes for a constraint on total firing rate. It is not clear why each of these constraints would predominate in each respective system. Here we show that these two representations can emerge from the requirement to perform a biologically relevant task (extracting object identity from an image) with a bottleneck constraint on the dimensionality of the retinal output. Interestingly, this constraint differs from the ones used previously to account for center-surround RFs (number of synapses or total firing rate). It is worth noting that we unsuccessfully tried to reproduce the of BID21 in our network, by adding noise to the image and applying an L1 regularization to the retina-net activations. In our framework (different than the one of BID21 in many ways), the receptive fields of the retina-net without bottleneck remained oriented across the full range of orders of magnitude of noise and L1 regularization that permitted successful task performance. There is a long-standing debate on whether the role of the retina is to extract relevant features from the environment BID25 BID17 BID32, or to efficiently encode all visual information indistinctly BID2 BID0 BID18. In this work, we show that our model of the visual system, trained on the same task and with the same input statistics, can exhibit different retinal representations depending on the degree of neural resources allocated to downstream processing by the ventral visual stream. These suggest the hypothesis that, despite its conserved structure across evolution, the retina could prioritize different computations in different species. In species with fewer brain resources devoted to visual processing, the retina should nonlinearly extract relevant features from the environment for object recognition, and in species with a more complex ventral visual stream, the retina should prioritize a linear and efficient transmission of visual information for further processing by the brain. Although all species contain a mix of quasi-linear and nonlinear cell types, the proportion of quasi-linear cells seems to vary across species. In the mouse, the most numerous cell type is a two-stage nonlinear feature detector, thought to detect overhead predators BID38. In contrast, the most common ganglion cell type in the primate retina is fairly well approximated by a linear filter (midget cells, 50% of all cells and >95% in the central retina BID32 BID11). Note however that two-stage nonlinear models are also present in larger species, such as cat Y-type cells and primate parasol cells BID10, making it difficult to make definitive statements about inter-species differences in retinal coding. To gain a better understanding of these differences, it would be useful to collect a dataset consisting of recordings of complete populations of ganglion cells of different species in response to a common bank of natural scenes. A related question is the role of the parcellation of visual information in many ganglion cell types at the retinal output. 
A recent theory of efficient coding has shown that properties of midget and parasol cells in the primate retina can emerge from the objective of faithfully encoding natural movies with a cost on the total firing rate traversing the optic nerve BID29. On the other hand, many cell types seem exquisitely sensitive to behaviorally relevant features, such as potential prey or predators BID17. For example, some cell types in the frog are tuned to detect moving flies or looming predators BID25. It is an intriguing possibility that different cell types could subserve different functions within a single species, namely efficient coding of natural scenes for some types and extraction of behaviorally-relevant features for others. In this study we allowed only a limited number of cell types (i.e. convolutional channels) at the retinal output (1 to 4), in order to have a dimensionality expansion between the retinal representation and the representation in the ventral visual stream (32 channels), an important condition to see the retinal center-surround representation emerge. By using larger networks with more channels in the retina-net and the VVS-net, we could study the emergence of a greater diversity of neuron types in our retina-net and compare their properties to real retinal cell types. It would also be interesting to extend our model to natural movies. Indeed, most feature detectors identified to date seem to process some form of image motion: wide-field, local or differential BID32. Adding a temporal dimension to the model would be necessary to study their emergence. In , by studying emergent representations learned by a deep network trained on a biologically relevant task, we found that striking differences in retinal and cortical representations of visual information could be a consequence of the anatomical constraint of transmitting visual information through a low-dimensional communication channel, the optic nerve. Moreover, our computational explorations suggest that the rich diversity of retinal representations found across species could have adaptively co-evolved with the varying sophistication of subsequent processing performed by the ventral visual stream. These insights illustrate how deep neural networks, whose creation was once inspired by the visual system, can now be used to shed light on the constraints and objectives that have driven the evolution of our visual system. The following analysis corroborates our qualitative observation that a dimensionality bottleneck in the retina-net yields center-surround retinal receptive fields and oriented, edge-detecting receptive fields in the first layer of the VVS-net (V1). For a given receptive field, we quantified its orientedness as follows: we displayed rectangular bar stimuli of all possible combinations of width, orientations and spatial translations that fit in the input image window. Among all these combinations, we selected the bar stimulus width, orientation, and translation that yielded the strongest response from the RF. Bars with the same width as the best stimuli were presented at all orientations and translations, and for each orientation, we select the strongest response it produced (across all translations). In this manner we obtained a measure of the strength of a receptive field's preference for all orientations. We measured the strength of each RF preference (maximum strength of response) for its preferred orientation and for the orthogonal orientation, and computed the ratio of these strengths. 
Completely isotropic filters would be expected to give a ratio of 1, while oriented filters should give higher ratios. Note however that some deviation from 1 may indicate noise in the filter rather than true orientedness. For each network layer, we averaged this ratio across filters (for convolutional layers with multiple layers) and trials (re-training of the same neural network architecture with different random initializations). We found that the average ratios were 1.56(±0.22) for the retinal output, 3.05(±0.30) for the first VVS-net layer, and 2.57(±0.27) for the second VVS-net layer, where error margins given are 95% confidence intervals. To help assess whether retinal RFs were more isotropic than expected by chance, we compared them to receptive fields composed of random Gaussian noise as a baseline. These give an average ratio (as computed above) of 1.97(±0.08), significantly higher than that for retinal RFs. Furthermore, the standard deviation of RF preference across orientations was significantly lower for the retinal RFs (0.118 ± 0.036) than for random RFs (0.177 ± 0.007), also indicating that retinal RFs were more isotropic than expected by chance. We also plot the average RF preference for different orientations at each layer to more comprehensively assess the isotropy of RFs at each network layer. To aggregate across multiple trials and filters, we rotated the coordinates of each receptive field such that its preferred orientation was vertical, and averaged our across filters and trials. (See FIG4 .The confirm our qualitative observations that RFs in the second layer of a vanilla network (N BN = 32) are highly oriented (FIG4) RFs in the second layer (retina output) of a bottleneck network (N BN = 1) are much more isotropic, consistent with center-surround RFs (FIG4 top), and RFs in the layer immediately following the retina-net in the bottleneck network are oriented (FIG4 .We also quantitatively corroborate our observation that oriented receptive fields in the V1 layer pool input from oriented arrays of center-surround filters in the retina-net output layer. We apply our method of isotropy quantification described above to the weight matrix for each input-output filter combination in the V1 convolutional layer. We find that this weight matrix itself exhibits orientedness across filters and trials, confirming our observation ( FIG4). To investigate whether neurons in our model's early layers more closely resembled simple or complex cells, we performed the following analysis. As before, we obtained local linear approximations of receptive fields by computing the gradient in input space with respect to the response of a given neuron. Rather than beginning with a blank input, we ran multiple trials with different randomly initialized inputs. A purely linear cell would give the same no matter the initialization; a somewhat nonlinear but still "simple" cell is expected to give similar across initializations. A "complex" cell is expected to give different RF visualizations for different random inputs, reflecting multiple peaks in its response as a function of input. In FIG5 we show examples of receptive fields at different layers of our retina-net + VVS-net model (with N BN = 1, D V V S = 2) for different random intializations of the image (uniform random in). The retina-net output and first VVS-net layer exhibit "simple" behavior, but the second VVS-net layer exhibits observably "complex" behavior. 
To quantify this effect, we measure the average (across filters within each layer and re-trainings of the same network architecture) standard deviation of computed RFs (normalized to a common range) for each network layer. We found that the average standard deviations were 7.9(±1.1) × 10^−3, 15.4(±0.8) × 10^−3, and 35.9(±0.8) × 10^−3 for the retina-net output, first VVS-net layer, and second VVS-net layer, respectively, where the margins of error given are 95% confidence intervals. These results corroborate the observation of significantly more complex behavior in the second VVS-net layer, mirroring the biological phenomenon in which complex cells pool from simple cells in V1.

We tested the robustness of our first main finding (that a bottlenecked retina-net + VVS-net model yields center-surround receptive fields in the retina and oriented receptive fields in V1) to the use of biologically realistic local response normalization at every layer of the network. In particular, we locally normalized the output x of each channel (row r, column c) of each layer, during both training and testing.

[Figure 8: Class separability at all layers of the network for a deep VVS-net (D_VVS = 4) with and without a bottleneck (N_BN = 1 and N_BN = 32). The retinal representation of the bottleneck network has low separability. However, the first layer of the VVS-net has high separability. We additionally plot the separability of the linearized bottleneck (N_BN = 1) network (see text) as a function of layer. That the jump in linear separability between layers 2 and 3 survives linearization suggests that the main effect of retinal processing in this network is whitening (see Fig. 9) rather than nonlinear processing.]

In the case of the deepest VVS-nets tested, the retinal processing was quasi-linear for the tightest bottleneck (var.expl. = 0.9, N_BN = 1, FIG3). However, the very first layer of the VVS-net after the retina disentangled classes (as measured by linear separability) almost as well as the second layer of a VVS-net without a retina (FIG3), suggesting that the retinal representation, while only moderately linearly separable itself, is especially transformable into a representation with a high linear separability. To determine to what degree this increased separability was due to the linear processing or the slightly nonlinear part of the retinal processing, we performed an ablation experiment to isolate this factor. We first replaced the true retinal processing by its best approximation by a one-layer linear convolution (of sufficient filter width to correspond to two convolutional layers with 9 by 9 filters). After this linearization process, we retrained the VVS-net using the linearized retinal representation as input, keeping the linearized retina weights frozen. We found that the first layer trained on the output of the linearized retinal representation was indeed much better than the first layer of the control network (trained directly on natural images) at separating classes of objects.
[Figure 9: Visualization of the output of the retina-net (one-channel bottleneck, i.e., N_BN = 1) for different images from the testing set (x-axis) as a function of VVS-net depth (y-axis). Each pixel intensity of the retinal image is proportional to the activation of the corresponding neuron of the retina, where light shades indicate high activities and dark shades low activities. While retinas for every VVS-net depth appear to whiten the input, we can see that the retinal image is more and more processed and less and less recognizable as VVS-net depth decreases.]
We reproduced neural representations found in biological visual systems by simulating their neural resource constraints in a deep convolutional model.
While it has not yet been proven, empirical evidence suggests that model generalization is related to local properties of the optima which can be described via the Hessian. We connect model generalization with the local property of a solution under the PAC-Bayes paradigm. In particular, we prove that model generalization ability is related to the Hessian, the higher-order "smoothness" terms characterized by the Lipschitz constant of the Hessian, and the scales of the parameters. Guided by the proof, we propose a metric to score the generalization capability of the model, as well as an algorithm that optimizes the perturbed model accordingly. Deep models have proven to work well in applications such as computer vision BID18 BID8 BID14, speech recognition, and natural language processing BID35 BID6 BID25. Many deep models have millions of parameters, which is more than the number of training samples, but the models still generalize well BID11.On the other hand, classical learning theory suggests the model generalization capability is closely related to the "complexity" of the hypothesis space, usually measured in terms of number of parameters, Rademacher complexity or VC-dimension. This seems to be a contradiction to the empirical observations that over-parameterized models generalize well on the test data 1. Indeed, even if the hypothesis space is complex, the final solution learned from a given training set may still be simple. This suggests the generalization capability of the model is also related to the property of the solution. BID15 and BID1 empirically observe that the generalization ability of a model is related to the spectrum of the Hessian matrix ∇ 2 L(w *) evaluated at the solution, and large eigenvalues of the ∇ 2 L(w *) often leads to poor model generalization. Also, BID15, BID1 and BID31 introduce several different metrics to measure the "sharpness" of the solution, and demonstrate the connection between the sharpness metric and the generalization empirically. BID2 later points out that most of the Hessian-based sharpness measures are problematic and cannot be applied directly to explain generalization. In particular, they show that the geometry of the parameters in RELU-MLP can be modified drastically by re-parameterization. Another line of work originates from Bayesian analysis. first introduced Taylor expansion to approximate the (log) posterior, and considered the second-order term, characterized by the Hessian of the loss function, as a way of evaluating the model simplicity, or "Occam factor". Recently BID34 use this factor to penalize sharp minima, and determine the optimal batch size. BID4 connect the PAC-Bayes bound and the Bayesian marginal likelihood when the loss is (bounded) negative log-likelihood, which leads to an alternative perspective on Occam's razor. BID19, and more recently, BID7 BID28 BID29 use PAC-Bayes bound to analyze the generalization behavior of the deep models. Since the PAC-Bayes bound holds uniformly for all "posteriors", it also holds for some particular "posterior", for example, the solution parameter perturbed with noise. This provides a natural The sharp minimum, even though it approximates the true label better, has some complex structures in its predicted labels, while the flat minimum seems to produce a simpler classification boundary. way to incorporate the local property of the solution into the generalization analysis. 
In particular, BID28 suggests to use the difference between the perturbed loss and the empirical loss as the sharpness metric. BID3 tries to optimize the PAC-Bayes bound instead for a better model generalization. Still some fundamental questions remain unanswered. In particular we are interested in the following question:How is model generalization related to local "smoothness" of a solution?In this paper we try to answer the question from the PAC-Bayes perspective. Under mild assumptions on the Hessian of the loss function, we prove the generalization error of the model is related to this Hessian, the Lipschitz constant of the Hessian, the scales of the parameters, as well as the number of training samples. The analysis also gives rise to a new metric for generalization. Based on this, we can approximately select an optimal perturbation level to aid generalization which interestingly turns out to be related to Hessian as well. Inspired by this observation, we propose a perturbation based algorithm that makes use of the estimation of the Hessian to improve model generalization. We consider the supervised learning in PAC-Bayes scenario BID24 BID22 BID23 BID20 ). Suppose we have a labeled data set S = DISPLAYFORM0 The PAC-Bayes paradigm assumes probability measures over the function class F: X → Y. In particular, it assumes a "posterior" distribution D f as well as a "prior" distribution π f over the function class F. We are interested in minimizing the expected loss, in terms of both the random draw of samples as well as the random draw of functions: DISPLAYFORM1 Correspondingly, the empirical loss in the PAC-Bayes paradigm is the expected loss over the draw of functions from the posterior: DISPLAYFORM2 PAC-Bayes theory suggests the gap between the expected loss and the empirical loss is bounded by a term that is related to the KL divergence between D f and π f BID20. In particular, if the function f is parameterized as f (w) with w ∈ W, when D w is perturbed around any w, we have the following PAC-Bayes bound BID32 ) BID33 BID28 BID29: DISPLAYFORM3, and π be any fixed distribution over the parameters W. For any δ > 0 and η > 0, with probability at least 1 − δ over the draw of n samples, for any w and any random perturbation u, DISPLAYFORM4 One may further optimize η to get a bound that scales approximately as BID33. 3 A nice property of the perturbation bound FORMULA4 is it connects the generalization with the local properties around the solution w through some perturbation u around w. In particular, supposeL(w *) is a local optimum, when the perturbation level of u is small, E u [L(w * + u)] tends to be small, but KL(w * + u||π) may be large since the posterior is too "focused" on a small neighboring area around w *, and vice versa. As a consequence, we may need to search for an "optimal" perturbation level for u so that the bound is minimized. DISPLAYFORM5 While some researchers have already discovered empirically the generalization ability of the models is related to the second order information around the local optima, to the best of our knowledge there is no work on how to connect the Hessian matrix ∇ 2L (w) with the model generalization rigorously. In this section we introduce the local smoothness assumption, as well as our main theorem. It may be unrealistic to assume global smoothness properties for the deep models. Usually the assumptions only hold in a small local neighborhood N eigh(w *) around a reference point w *. 
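To make the role of the perturbation bound concrete, below is a minimal PyTorch-style sketch (not part of the original derivation) that Monte-Carlo estimates the perturbed empirical loss E_u[L̂(w + u)] and the KL term for a diagonal Gaussian posterior centered at w with a zero-mean Gaussian prior. The paper itself analyzes uniform and truncated-Gaussian perturbations, so the Gaussian choice here, as well as the `sigma` and `loader` interfaces, are only illustrative assumptions.

```python
import torch

def perturbed_empirical_loss(model, loss_fn, loader, sigma, n_trials=4):
    """Monte-Carlo estimate of E_u[ L_hat(w + u) ] with u ~ N(0, diag(sigma^2)).

    `sigma` maps parameter name -> per-coordinate noise scale (an assumed interface;
    the paper instead uses uniform noise on a per-coordinate radius kappa_i)."""
    params = {n: p for n, p in model.named_parameters()}
    total, count = 0.0, 0
    for _ in range(n_trials):
        noise = {n: torch.randn_like(p) * sigma[n] for n, p in params.items()}
        with torch.no_grad():
            for n, p in params.items():
                p.add_(noise[n])                 # w -> w + u
            for x, y in loader:
                total += loss_fn(model(x), y).item() * x.size(0)
                count += x.size(0)
            for n, p in params.items():
                p.sub_(noise[n])                 # restore w
    return total / count

def kl_diag_gaussians(w, sigma, prior_std):
    """KL( N(w, diag(sigma^2)) || N(0, prior_std^2 I) ) for flattened parameters."""
    var, pvar = sigma ** 2, prior_std ** 2
    return 0.5 * torch.sum(var / pvar + w ** 2 / pvar - 1.0 - torch.log(var / pvar))
```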
In this paper we define the neighborhood set as DISPLAYFORM0 + is the "radius" of the i-th coordinate. In our draft we focus on a particular type of radius κ i (w *) = γ|w * i | +, but our argument holds for other types of radius, too. In order to get a control of the deviation of the optimal solution we need to assume in N eigh γ, (w *), the empirical loss functionL in is Hessian Lipschitz, which is defined as: Definition 1 (Hessian Lipschitz). A twice differentiable function f (·) is ρ-Hessian Lipschitz if: DISPLAYFORM1 where · is the operator norm. The Hessian Lipschitz condition has been used in the numeric optimization community to model the smoothness of the second-order gradients BID27 BID0 BID13. In the rest of the draft we always assume the following: FORMULA2 is convex, and ρ-Hessian Lipschitz. DISPLAYFORM2 For the uniform perturbation, the following theorem holds: Theorem 2. Suppose the loss function l(f, x, y) ∈, and model weights are bounded |w i | + κ i (w) ≤ τ i ∀i. With probability at least 1 − δ over the draw of n samples, for anyw ∈ R m such that assumption 1 holds DISPLAYFORM3 Theorem 2 says if we choose the perturbation levels carefully, the expected loss of a uniformly perturbed model is controlled. The bound is related to the diagonal element of Hessian (logarithmic), the Lipschitz constant ρ of the Hessian (logarithmic), the neighborhood scales characterized by κ (logarithmic), the number of parameters m, and the number of samples n. Also roughly the perturbation level is inversely related to ∇ 2 i,iL, suggesting the model be perturbed more along the coordinates that are "flat". 4 Similar argument can be made on the truncated Gaussian perturbation, which is presented in Appendix B. In the next section we walk through some intuitions of our arguments. Suppose the empirical loss functionL(w) satisfies the local Hessian Lipschitz condition, then by Lemma 1 in BID27, the perturbation of the function around a fixed point can be bounded by terms up to the third-order, DISPLAYFORM0 For perturbations with zero expectation, i.e., E[u] = 0, the linear term in, E u [∇L(w) T u] = 0. Because the perturbation u i for different parameters are independent, the second order term can also be simplified, since FORMULA10 and assumption 1, it is straightforward to see the bound below holds with probability at least DISPLAYFORM1 DISPLAYFORM2 That is, the "posterior" distribution of the model parameters are uniform distribution, and the distribution supports vary for different parameters. We also assume the perturbed parameters are bounded, i.e., |w i | + κ i (w) ≤ τ i ∀i.5 If we choose the prior π to be u i ∼ U (−τ i, τ i), and then KL(w + u||π) = i log(τ i /σ i).The third order term in is bounded by DISPLAYFORM3 Unfortunately the bound in theorem 2 does not explain the over-parameterization phenomenon since when m n the right hand side explodes. 5 One may also assume the same τ for all parameters for a simpler argument. The proof procedure goes through in a similar way. DISPLAYFORM4 Solve for σ that minimizes the right hand side, and we have the following lemma: Lemma 3. Suppose the loss function l(f, x, y) ∈, and model weights are bounded |w i | + κ i (w) ≤ τ i ∀i. Given any δ > 0 and η > 0, with probability at least 1 − δ over the draw of n samples, for any w * ∈ R m such that assumption 1 holds, DISPLAYFORM5 where DISPLAYFORM6. uniformly perturbed random variables, and DISPLAYFORM7 In our experiment, we simply treat η as a hyper-parameter. 
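The cubic remainder bound implied by Definition 1, |L̂(w+u) − L̂(w) − ∇L̂·u − ½ uᵀ∇²L̂ u| ≤ (ρ/6)‖u‖³, can be checked numerically on a toy objective whose Hessian-Lipschitz constant is available in closed form. The sketch below is illustrative only; the quartic objective and the local constant ρ are our own choices, not the paper's.

```python
import numpy as np

# Toy check of the cubic Taylor bound used in the analysis:
#   |L(w+u) - L(w) - g.u - 0.5 u^T H u| <= (rho/6) ||u||^3
# For L(w) = sum(w_i^4)/12 the Hessian is diag(w_i^2), so on the segment [w, w+u]
# a valid Hessian-Lipschitz constant is rho = 2 * (max|w_i| + ||u||).
def L(w):    return np.sum(w ** 4) / 12.0
def grad(w): return w ** 3 / 3.0
def hess(w): return np.diag(w ** 2)

rng = np.random.default_rng(0)
w = rng.normal(size=5)
for scale in [1e-1, 1e-2, 1e-3]:
    u = rng.normal(size=5) * scale
    taylor = L(w) + grad(w) @ u + 0.5 * u @ hess(w) @ u
    remainder = abs(L(w + u) - taylor)
    rho = 2.0 * (np.max(np.abs(w)) + np.linalg.norm(u))
    bound = rho / 6.0 * np.linalg.norm(u) ** 3
    print(f"scale={scale:g}  remainder={remainder:.3e}  bound={bound:.3e}  ok={remainder <= bound}")
```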
Other other hand, one may further build a weighted grid over η and optimize for the best η BID33. That leads to Theorem 2. Details of the proof are presented in the Appendix C and D.5 ON THE RE-PARAMETERIZATION OF RELU-MLP BID2 points out the spectrum of ∇ 2L itself is not enough to determine the generalization power. In particular, for a multi-layer perceptron with RELU as the activation function, one may re-parameterize the model and scale the Hessian spectrum arbitrarily without affecting the model prediction and generalization when cross entropy (negative log likelihood) is used as the loss and w * is the "true" parameter of the sample distribution. In general our bound does not assume the loss to be the cross entropy. Also we do not assume the model is RELU-MLP. As a we would not expect our bound stays exactly the same during the re-parameterization. On the other hand, the optimal perturbation levels in our bound scales inversely when the parameters scale, so the bound only changes approximately with a speed of logarithmic factor. According to Lemma, if we use the optimal σ * on the right hand side of the bound, ∇ 2L (w), ρ, and w * are all behind the logarithmic function. As a consequence, for RELU-MLP, if we do the re-parameterization trick, the change of the bound is small. In the next two sections we introduce some heuristic-based approximations enlightened by the bound, as well as some interesting empirical observations. 6 AN APPROXIMATE GENERALIZATION METRIC AssumingL(w) is locally convex around w *, so that ∇ 2 i,iL (w *) ≥ 0 for all i. If we look at Lemma 3, for fixed m and n, the only relevant term is i log τi σ * i. Replacing the optimal σ *, and using |w i | + κ i (w) to approximate τ i, we come up with PAC-Bayes based Generalization metric, called pacGen, DISPLAYFORM8.6 Even though we assume the local convexity in our metric, in application we may calculate the metric on every points. When ∇ A self-explained toy example is displayed in FIG0. To calculate the metric on real-world data we need to estimate the diagonal elements of the Hessian ∇ 2L as well as the Lipschitz constant ρ of the Hessian. For efficiency concern we follow Adam and approximate ∇ To estimate ρ, we first estimate the Hessian of a randomly perturbed model ∇ (w + u), and then DISPLAYFORM0. For the neighborhood radius κ we use γ = 0.1 and = 0.1 for all the experiments in this section. We used the same model without dropout from the PyTorch example 7. Fixing the learning rate as 0.1, we vary the batch size for training. The gap between the test loss and the training loss, and the metric Ψ κ (L, w *) are plotted in Figure 2. We had the same observation as in BID15 ) that as the batch size grows, the gap between the test loss and the training loss tends to get larger. Our proposed metric Ψ κ (L, w *) also shows the exact same trend. Note we do not use LR annealing heuristics as in BID5 which enables large batch training. Similarly we also carry out experiment by fixing the training batch size as 256, and varying the learning rate. Figure 4 shows generalization gap and Ψ κ (L, w *) as a function of epochs. It is observed that as the learning rate decreases, the gap between the test loss and the training loss increases. And the proposed metric Ψ κ (L, w *) shows similar trend compared to the actual generalization gap. Similar trends can be observed if we run the same model on CIFAR-10 as shown in Figure 3 and Figure 5. 
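For completeness, here is a rough sketch of how a pacGen-style score could be computed in practice. The exact closed form of the optimal σ*_i from Lemma 3 is not reproduced here, so the perturbation level below is only a qualitative stand-in that shrinks with the estimated curvature; the Hessian diagonal is approximated Adam-style by an exponential moving average of squared gradients, as in the paper.

```python
import torch

def update_hessian_diag(h_ema, grad, beta=0.999):
    """Adam-style exponential smoothing of squared gradients, used in the paper as a
    cheap stand-in for the diagonal of the Hessian."""
    return beta * h_ema + (1.0 - beta) * grad ** 2

def pacgen_score(w, hess_diag, rho, gamma=0.1, eps=0.1):
    """Illustrative sketch of a pacGen-style score Psi = sum_i log(tau_i / sigma_i).

    tau_i is approximated by |w_i| + kappa_i(w) as in the paper.  The paper's optimal
    sigma*_i from Lemma 3 is NOT reproduced here; as a hedged stand-in we use a
    perturbation level that decreases with the local curvature.  Treat this as a
    qualitative proxy, not the paper's exact formula."""
    kappa = gamma * w.abs() + eps                        # per-coordinate radius
    tau = w.abs() + kappa                                # scale bound on perturbed weights
    curv = torch.clamp(hess_diag, min=0.0) + rho * kappa # local curvature proxy
    sigma = kappa / torch.sqrt(1.0 + kappa ** 2 * curv)  # assumed stand-in for sigma*_i
    return torch.sum(torch.log(tau / sigma))
```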
Adding noise to the model for better generalization has proven successful both empirically and theoretically BID38 BID10 BID12 BID3 BID30. Instead of only minimizing the empirical loss, (Langford & Caruana, The right hand side of has E u [L(w + u)]. This suggests rather than minimizing the empirical lossL(w), we should optimize the perturbed empirical loss E u [L(w + u)] instead for a better model generalization power. We introduce a systematic way to perturb the model weights based on the PAC-Bayes bound. Again we use the same exponential smoothing technique as in Adam to estimate the Hessian ∇. The details of the algorithm is presented in Algorithm 1, where we treat η as a hyper-parameter. Even though in theoretical analysis E u [∇L · u] = 0, in applications, ∇L · u won't be zero especially when we only implement 1 trial of perturbation. On the other hand, if the gradient ∇L is close to zero, then the first order term can be ignored. As a consequence, in Algorithm 1 we only perturb the parameters that have small gradients whose absolute value is below β 2. For efficiency issues we used a per-parameter ρ i capturing the variation of the diagonal element of Hessian. Also we decrease the perturbation level with a log factor as the epoch increases. We compare the perturbed algorithm against the original optimization method on CIFAR-10, CIFAR-100 BID17, and Tiny ImageNet 8. The are shown in FIG6. We use the Wide-ResNet BID36 as the prediction model. 9 The depth of the chosen model is 58, and the widen-factor is set as 3. The dropout layers are turned off. For CIFAR-10 and CIFAR-100, we use Adam with a learning rate of 10 −4, and the batch size is 128. For the perturbation parameters we use η = 0.01, γ = 10, and =1e-5. For Tiny ImageNet, we use SGD with learning rate 10 −2, and the batch size is 200. For the perturbed SGD we set η = 100, γ = 1, Require: η, γ = 0.1, β 1 = 0.999, β 2 = 0.1, =1e-5. 1: Initialization: DISPLAYFORM0 for minibatch in one epoch do for all i do if t > 0 then 6: DISPLAYFORM0 g t+1 ← ∇ wLt (w t + u t) (get stochastic gradients w.r.t. perturbed loss) 11: DISPLAYFORM1 w t+1 ← OPT(w t) (update w using off-the-shell algorithms) 13: ImageNet. For CIFAR, Adam is used as the optimizer, and the learning rate is set as 10 −4. For the Tiny ImageNet, SGD is used as the optimizer, and the learning rate is set as 10 −2. The dropout method in the comparison uses 0.1 as the dropout rate. Details can be found in Appendix G. and =1e-5. Also we use the validation set as the test set for the Tiny ImageNet. We observe the effect with perturbation appears similar to regularization. With the perturbation, the accuracy on the training set tends to decrease, but the test on the validation set increases. The perturbedOPT also works better than dropout possibly due to the fact that the it puts different levels of perturbation on different parameters according to the local smoothness structures, while only one dropout rate is set for the all the parameters across the model for the dropout method. DISPLAYFORM2 We connect the smoothness of the solution with the model generalization in the PAC-Bayes framework. We prove that the generalization power of a model is related to the Hessian and the smoothness of the solution, the scales of the parameters, as well as the number of training samples. In particular, we prove that the best perturbation level scales roughly as the inverse of the square root of the Hessian, which mostly cancels out scaling effect in the re-parameterization suggested by BID2. 
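To illustrate the perturbation step of Algorithm 1 described above, the following is a hedged PyTorch-style sketch of one training iteration: the Hessian diagonal is tracked by an exponential moving average of squared gradients, only coordinates with small gradients are perturbed with uniform noise, the gradient is taken at the perturbed weights, and the unperturbed weights are then updated by an off-the-shelf optimizer. The exact noise scale and its annealing schedule are our own simplifications rather than the paper's formulas.

```python
import math
import torch

def perturbed_step(model, optimizer, loss_fn, x, y, state,
                   eta=0.01, gamma=0.1, beta1=0.999, beta2=0.1, eps=1e-5, epoch=1):
    """One step of a perturbed-loss optimizer in the spirit of Algorithm 1.
    `state` holds per-parameter EMAs of squared gradients (Hessian-diagonal proxy)."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradient at w: updates the curvature estimate and selects "flat" coordinates.
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    noises = []
    for p in params:
        h = state.setdefault(id(p), torch.zeros_like(p))
        h.mul_(beta1).add_((1.0 - beta1) * p.grad ** 2)      # EMA of squared gradients
        kappa = gamma * p.abs() + eps
        scale = kappa / torch.sqrt(1.0 + eta * h) / math.log(2.0 + epoch)  # assumed scale
        mask = (p.grad.abs() < beta2).float()                # only perturb flat coordinates
        u = (torch.rand_like(p) * 2.0 - 1.0) * scale * mask  # uniform noise in [-scale, scale]
        p.data.add_(u)
        noises.append((p, u))

    # Gradient of the perturbed loss, then restore w and update with the base optimizer.
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    for p, u in noises:
        p.data.sub_(u)
    optimizer.step()
```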
To the best of our knowledge, this is the first work that integrate Hessian in the model generalization bound rigorously. It also roughly explains the effect of re-parameterization over the generalization. Based on our generalization bound, we propose a new metric to test the model generalization and a new perturbation algorithm that adjusts the perturbation levels according to the Hessian. Finally, we empirically demonstrate the effect of our algorithm is similar to a regularizer in its ability to attain better performance on unseen data. This section discusses the details of the toy example shown in FIG0. We construct a small 2-dimensional sample set from a mixture of 3 Gaussians, and then binarize the labels by thresholding them from the median value. The sample distribution is shown in FIG0. For the model we use a 5-layer MLP with sigmoid as the activation and cross entropy as the loss. There are no bias terms in the linear layers, and the weights are shared. For the shared 2-by-2 linear coefficient matrix, we treat two entries as constants and optimize the other 2 entries. In this way the whole model has only two free parameters w 1 and w 2. The model is trained using 100 samples. Fixing the samples, we plot the loss function with respect to the model variablesL(w 1, w 2), as shown in FIG0. Many local optima are observed even in this simple two-dimensional toy example. In particular: a sharp one, marked by the vertical green line, and a flat one, marked by the vertical red line. The colors on the loss surface display the values of the generalization metric scores (pacGen) defined in Section 6. Smaller metric value indicates better generalization power. As displayed in the figure, the metric score around the global optimum, indicated by the vertical green bar, is high, suggesting possible poor generalization capability as compared to the local optimum indicated by the red bar. We also plot a plane on the bottom of the figure. The color projected on the bottom plane indicates an approximated generalization bound, which considers both the loss and the generalization metric.10 The local optimum indicated by the red bar, though has a slightly higher loss, has a similar overall bound compared to the "sharp" global optimum. On the other hand, fixing the parameter w 1 and w 2, we may also plot the labels predicted by the model given the samples. Here we plot the prediction from both the sharp minimum FIG0 ) and the flat minimum FIG0. The sharp minimum, even though it approximates the true label better, has some complex structures in its predicted labels, while the flat minimum seems to produce a simpler classification boundary. Because the Gaussian distribution is not bounded but the inequality requires bounded perturbation, we first truncate the distribution. The procedure of truncation is similar to the proof in BID29 and BID24. DISPLAYFORM0 Now let's look at the event DISPLAYFORM1, by union bound P(E) ≥ 1/2. Here erf −1 is the inverse Gaussian error function defined as erf(x) = Suppose the coefficients are bounded such that i w 2 i ≤ τ, where τ is a constant. Choose the prior π as N (0, τ I), and we have DISPLAYFORM2 10 the bound was approximated with η = 39 using inequality Notice that after the truncation the variance only becomes smaller, so the bound of for the truncated Gaussian becomes DISPLAYFORM3 Again whenL(w) is convex around w * such that ∇ (w *) ≥ 0, solve for the best σ i and we get the following lemma: Lemma 4. 
Suppose the loss function l(f, x, y) ∈, and model weights are bounded i w 2 i ≤ τ. For any δ > 0 and η, with probability at least 1 − δ over the draw of n samples, for any w * ∈ R m such that assumption 1 holds, DISPLAYFORM0 DISPLAYFORM1. random variables distributed as truncated Gaussian, DISPLAYFORM2 and σ * 2 i is the i-th diagonal element in Σ *.Again we have an extra term η, which may be further optimized over a grid to get a tighter bound. In our algorithm we treat η as a hyper-parameter instead. C PROOF OF LEMMA 3Proof. We rewrite the inequality below DISPLAYFORM3 The terms related to σ i on the right hand side of are DISPLAYFORM4 Since the assumption is DISPLAYFORM5 Solving for σ that minimizes the right hand side of FORMULA2, and we have DISPLAYFORM6 The term DISPLAYFORM7 i on the right hand side of FORMULA14 is monotonically increasing w.r.t. σ 2, so DISPLAYFORM8 Combine the inequality, and the equation FORMULA2 with FORMULA2, and we complete the proof. Proof. Combining and FORMULA14, we get DISPLAYFORM0 The following proof is similar to the proof of Theorem 6 in BID33. Note the η in Lemma cannot depend on the data. In order to optimize η we need to build a grid of the form DISPLAYFORM1 For a given value of i log τǐ σi, we pick η j, such that j = 1 2 log i log F A LEMMA ABOUT EIGENVALUES OF HESSIAN AND GENERALIZATION By extrema of the Rayleigh quotient, the quadratic term on the right hand side of inequality FORMULA10 is further bounded by DISPLAYFORM2 This is consistent with the empirical observations of BID15 that the generalization ability of the model is related to the eigenvalues of ∇ 2L (w). The inequality still holds even if the perturbations u i and u j are correlated. We add another lemma about correlated perturbations below. Lemma 5. Suppose the loss function l(f, x, y) ∈. Let π be any distribution on the parameters that is independent from the data. Given δ > 0 η > 0, with probability at least 1 − δ over the draw of n samples, for any local optimal w * such that ∇L(w *) = 0,L(w) satisfies the local ρ-Hessian Lipschitz condition in N eigh κ (w *), and any random perturbation u, s.t., DISPLAYFORM3 Proof. The proof of the Lemma 5 is straightforward. Since ∇L(w *) = 0, the first order term is zero at the local optimal point even if E[u] = 0. By extrema of the Rayleigh quotient, the quadratic term on the right hand side of inequality FORMULA10 is further bounded by DISPLAYFORM4 Due to the linearity of the expected value, DISPLAYFORM5 which does not assume independence among the perturbations u i and u j for i = j. This section contains several figures comparing dropout and the proposed perturbation algorithm. Dropout can be viewed as multiplicative perturbation using Bernoulli distribution. It has already been widely used in almost every deep models. For comparison we present using the exact same wide resnet architectures except the dropout layers are turned on or off. We report the accuracy with dropout rate of 0.0, 0.1, 0.3, and 0.5 on CIFAR-10 and CIFAR-100. For Tiny ImageNet we report the with dropout rate being 0.0, 0.1, and 0.3. Again for the pertOPT algorithm all the dropout layers are turned off. The depth of the chosen wide resnet model BID36 is 58, and the widenfactor is set as 3. For CIFAR-10 and CIFAR-100, we use Adam with a learning rate of 10 −4, and the batch size is 128. For the perturbation parameters we use η = 0.01, γ = 10, and =1e-5. For Tiny ImageNet, we use SGD with learning rate 10 −2, and the batch size is 200. 
For the perturbed SGD we set η = 100, γ = 1, and ε = 1e-5. We also use the validation set as the test set for Tiny ImageNet. The figures in this appendix show the accuracy versus epochs for training and validation on CIFAR-10, CIFAR-100, and Tiny ImageNet, respectively. Adding dropout clearly boosts the validation/test accuracy compared to the original method. For CIFAR-10, a dropout rate of 0.3 works best among the configurations we tried; for CIFAR-100 and Tiny ImageNet, a rate of 0.1 works better. This may be because CIFAR-10 has fewer training samples, so more regularization is needed to prevent overfitting. Although both perturbedOPT and dropout can be viewed as forms of regularization, in all experiments the perturbed algorithm outperforms the dropout methods on the validation/test sets. One possible explanation is that the perturbed algorithm assigns different perturbation levels to different parameters according to the local smoothness structure, whereas dropout applies a single rate to all parameters across the model.
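Relating back to the Rayleigh-quotient argument of Appendix F, the quadratic term is controlled by the largest eigenvalue of ∇²L̂(w). A standard way to estimate that eigenvalue without forming the Hessian is power iteration on Hessian-vector products; the sketch below is not from the paper and assumes `loss` was computed from `params` with the autograd graph available.

```python
import torch

def top_hessian_eigenvalue(loss, params, n_iter=20):
    """Power iteration with Hessian-vector products; the returned lambda_max bounds the
    quadratic term E[u^T H u] <= lambda_max * E||u||^2 (extrema of the Rayleigh quotient)."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    lam = 0.0
    for _ in range(n_iter):
        norm = torch.sqrt(sum((vi ** 2).sum() for vi in v))
        v = [vi / norm for vi in v]
        hv = torch.autograd.grad(grads, params, grad_outputs=v, retain_graph=True)
        lam = sum((hvi * vi).sum() for hvi, vi in zip(hv, v)).item()  # Rayleigh quotient
        v = [hvi.detach() for hvi in hv]
    return lam
```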
BJxOHs0cKm
a theory connecting Hessian of the solution and the generalization power of the model
Unsupervised learning is about capturing dependencies between variables and is driven by the contrast between the probable vs improbable configurations of these variables, often either via a generative model which only samples probable ones or with an energy function (unnormalized log-density) which is low for probable ones and high for improbable ones. Here we consider learning both an energy function and an efficient approximate sampling mechanism for the corresponding distribution. Whereas the critic (or discriminator) in generative adversarial networks (GANs) learns to separate data and generator samples, introducing an entropy maximization regularizer on the generator can turn the interpretation of the critic into an energy function, which separates the training distribution from everything else, and thus can be used for tasks like anomaly or novelty detection. This paper is motivated by the older idea of sampling in latent space rather than data space because running a Monte-Carlo Markov Chain (MCMC) in latent space has been found to be easier and more efficient, and because a GAN-like generator can convert latent space samples to data space samples. For this purpose, we show how a Markov chain can be run in latent space whose samples can be mapped to data space, producing better samples. These samples are also used for the negative phase gradient required to estimate the log-likelihood gradient of the data space energy function. To maximize entropy at the output of the generator, we take advantage of recently introduced neural estimators of mutual information. We find that in addition to producing a useful scoring function for anomaly detection, the ing approach produces sharp samples (like GANs) while covering the modes well, leading to high Inception and Fréchet scores. The early work on deep learning relied on unsupervised learning BID13 BID2 BID17 ) to train energy-based models BID18, in particular Restricted Boltzmann Machines, or RBMs. However, it turned out that training energy-based models without an analytic form for the normalization constant is very difficult, because of the challenge of estimating the gradient of the partition function, also known as the negative phase part of the log-likelihood gradient (described in more details below, Sec. 2). Several algorithms were proposed for this purpose, such as Contrastive Divergence BID12 and Stochastic Maximum Likelihood BID28 BID26, relying on Monte-Carlo Markov Chains (MCMC) to iteratively sample from the energy-based model. However, because they appear to suffer from either high bias or high variance (due to long mixing times), training of RBMs and other Boltzmann machines has not remained competitive after the introduction of variational auto-encoders BID16 ) and generative adversarial networks or GANs.In this paper, we revisit the question of training energy-based models, taking advantage of recent advances in GAN-related research, and propose a novel approach to training energy functions and sampling from them, called EnGAN. The main inspiration for the proposed solution is the earlier observation BID4 made on stacks of auto-encoders that sampling in latent space (and then applying a decoder to map back to data space) led to faster mixing and more efficient sampling. The authors observed that whereas the data manifold is generally very complex and curved, the corresponding distribution in latent space tends to be much simpler and flatter. 
This was verified visually by interpolating in latent space and projecting back to data space through the decoder, observing that the ing samples look like data samples (i.e., the latent space manifold is approximately convex, with most points interpolated between examples encoded in latent space also having high probability). We propose a related approach, EnGAN, which also provides two energy functions, one in data space and one in latent space. A key ingredient of the proposed approach is the need to regularize the generator (playing the role of the decoder in auto-encoders, but with no need for an encoder) so as to increase its entropy. This is needed to make sure to produce negative examples that can kill off spurious minima of the energy function. This need was first identified by BID15, who showed that in order for an approximate sampler to match the density associated with an energy function, a compromise must be reached between sampling low energy configurations and obtaining a high-entropy distribution. However, estimating and maximizing the entropy of a complex high-dimensional distribution is not trivial, and we take advantage for this purpose of very recently proposed GAN-based approaches for maximizing mutual information BID1 BID24, since the mutual information between the input and the output of the generator is equal to the entropy at the output of the generator. In this context, the main contributions of this paper are the following:• proposing EnGAN, a general architecture, sampling and training framework for energy functions, taking advantage of an estimator of mutual information between latent variables and generator output and approximating the negative phase samples with MCMC in latent space, • showing that the ing energy function can be successfully used for anomaly detection, improving on recently published with energy-based models, • showing that EnGAN produces sharp images -with competitive Inception and Frechet scores -and which also better cover modes than standard GANs and WGAN-GPs, while not suffering from the common blurriness issue of many maximum likelihood generative models. Let x denote a sample in the data space X and E θ: X → R an energy function corresponding to minus the logarithm of an unnormalized density density function DISPLAYFORM0 where Z θ:= e −E θ (x) dx is the partition function or normalizing constant of the density sample in the latent space. Let p D be the training distribution, from which the training set is drawn. Towards optimizing the parameters θ of the energy function, the maximum likelihood parameter gradient is DISPLAYFORM1 where the second term is the gradient of log Z θ, and the sum of the two expectations is zero when training has converged, with expected energy gradients in the positive phase (under the data p D) matching those under the negative phase (under p θ (x)). Training thus consists in matching the shape of two distributions: the positive phase distribution (associated with the data) and the negative phase distribution (where the model is free-running and generating configurations by itself). This observation has motivated the pre-GAN idea presented by BID3 that "model samples are negative examples" and a classifier could be used to learn an energy function if it separated the data distribution from the model's own samples. Shortly after introducing also made a similar connection, related to noise-contrastive estimation BID11. One should also recognize the similarity between Eq. 
2 and the objective function for Wasserstein GANs or WGAN. In the next section, we examine a way to train what appears to be a particular form of WGAN that makes the discriminator compute an energy function. The main challenge in Eq. 2 is to obtain samples from the distribution p θ associated with the energy function E θ. Although having an energy function is convenient to obtain a score allowing to compare the relative probability of different x's, it is difficult to convert an energy function into a generative function. The commonly studied approaches for this are based on Monte-Carlo Markov chains, in which one iteratively updates a candidate configuration, until these configurations converge in distribution to the desired distribution p θ. For the RBM, the most commonly used algorithms have been Contrastive Divergence BID12 and Stochastic Maximum Likelihood BID28 BID26, relying on the particular structure of the RBM to perform Gibbs sampling. Although these MCMC-based methods are appealing, RBMs (and their deeper form, the deep Boltzmann machine) have not been competitive in recent years compared to autoregressive models BID27 ), variational auto-encoders and generative adversarial networks or GANs.What has been hypothesized as a reason for the poorer obtained with energy-based models trained with an MCMC estimator for the negative phase gradient is that running a Markov chain in data space is fundamentally difficult when the distribution is concentrated (e.g, near manifolds) and has many modes separated by vast areas of low probability. This mixing challenge is discussed by BID4 who argue that a Markov chain is very likely to produce only sequences of highly probable configurations. If two modes are far from each other and only local moves are possible (which is typically the case with MCMCs), it becomes exponentially unlikely to traverse the'desert' of low probability which can separate two modes. This makes mixing between modes difficult in high-dimensional spaces with strong concentration of probability mass in some places (e.g. corresponding to different categories) and very low probability elsewhere. In the same papers, the authors propose a heuristic method for jumping between modes, based on performing the random walk not in data space but in the latent space of an auto-encoder. Data samples can then be obtained by mapping the latent samples to data space via the decoder. They argue that auto-encoders tend to flatten the data distribution and bring the different modes closer to each other. The EnGAN sampling method proposed here is highly similar but leads to learning both an energy function in data space and one in latent space, from which we find that better samples are obtain. The energy function can be used to perform the appropriate Metropolis-Hastings rejection. Having an efficient way to approximately sample from the energy function also opens to the door to estimating the log-likelihood gradient with respect to the energy function according to Eq. 2, as outlined below. Turning a GAN discriminator into an energy function has been studied in the past BID15 BID31 BID6 but in order to turn a GAN discriminator into an energy function, a crucial and difficult requirement is the maximization of entropy at the output of the generator. Let's see why. In Eq. 
2, we can replace the difficult to sample p θ by another generative process, say p G, such as the generative distribution associated with a GAN generator: DISPLAYFORM0 where Ω is a regularizer which we found necessary to avoid numerical problems in the scale (temperature) of the energy. In this paper we use a gradient norm regularizer BID10 ) DISPLAYFORM1 2 for this purpose. This is similar to the training objective of a WGAN as to Eq. 2, but this interpretation allows to train the energy function only to the extent that p G is sufficiently similar to p θ. To make them match, consider optimizing G to minimize the KL divergence KL(p G ||p θ), which can be rewritten in terms of minimizing the energy of the samples from the generator while maximizing the entropy at the output of the generator: DISPLAYFORM2 as already shown by BID15. When taking the gradient of KL(p G ||p θ) with respect to the parameters w of the generator, the partition function of p G disappears and we equivalently can optimize w to minimize DISPLAYFORM3 where p z is the prior distribution of the latent variable of the generator. In order to maximize the entropy at the output of the generator, we propose to exploit another GANderived framework in order to estimate and maximize mutual information between the input and output of the generator network. The entropy at the output of a deterministic function (the generator in our case) can be computed using an estimator of mutual information between the input and output of that function, since the conditional entropy term is 0 because the function is deterministic. With x = G(z) the function of interest: DISPLAYFORM4 Hence, any neural mutual information maximization method such as MINE BID1, noise constrastive estimation BID24 and DeepINFOMAX can be applied to estimate and maximize the entropy of the generator. All these estimators are based on training a discriminator which separates the joint distribution p(X, Z) from the product of the corresponding marginals p(X)p(Z). As proposed by BID5 in the context of using a discriminator to minimize statistical dependencies between the outputs of an encoder, the samples from the marginals can be obtained by creating negative examples pairing an X and a Z from different samples of the joint, e.g., by independently shuffling each column of a matrix holding a minibatch with one row per example. The training objective for the discriminator can be chosen in different ways. In this paper, we used the Deep INFOMAX (DIM) estimator, which is based on maximizing the Jensen-Shannon divergence between the joint and the marginal (see Nowozin et al. for the original F-GAN formulation). DISPLAYFORM5 where s+(a) = log(1+e a) is the softplus function. The discriminator T used to increase entropy at the output of the generator is trained by maximizing I JSD (X, Z) with respect to the parameters of T. With X = G(Z) the output of the generator, IJSD (G(Z), Z) is one of the terms to be minimized the objective function for training G, with the effect of maximizing the generator's output entropy H(G(Z)). The overall training objective for G is DISPLAYFORM6 where Z ∼ p z, the latent prior (typically a N (0, I) Gaussian). Figure 1: EnGAN model overview where G ω is the Generator network, T φ is the Statistics network used for MI estimation and E θ is the energy network One option to generate samples is simply to use the usual GAN approach of sampling a z ∼ p z from the latent prior and then output x = G(z), i.e., obtain a sample x ∼ p G. 
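A compact PyTorch-style sketch of the two training objectives described above is given below: the energy loss combines the positive phase (data samples) and the negative phase (generator samples) with a gradient-norm regularizer Ω, and the generator loss trades low energy against high output entropy estimated through the Jensen-Shannon mutual information I_JSD(G(Z), Z), with product-of-marginals samples obtained by shuffling z within the minibatch. The `T(x, z)` interface for the statistics network and the exact penalty weight are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def jsd_mutual_information(T, x, z):
    """Jensen-Shannon MI lower bound: joint pairs (x_i, z_i) vs. shuffled pairs,
    scored by the statistics network T."""
    z_shuffled = z[torch.randperm(z.size(0))]
    joint = -F.softplus(-T(x, z)).mean()             # E_joint[ -softplus(-T) ]
    marginal = F.softplus(T(x, z_shuffled)).mean()   # E_marginals[ softplus(T) ]
    return joint - marginal

def generator_loss(energy, G, T, z):
    """Generator objective: low energy under E_theta plus high output entropy,
    with the entropy replaced by I(G(Z); Z)."""
    x_fake = G(z)
    return energy(x_fake).mean() - jsd_mutual_information(T, x_fake, z)

def energy_loss(energy, x_real, x_fake, lam=0.1):
    """Energy objective: positive phase on data, negative phase on generator samples,
    plus a gradient-norm regularizer on real data to control the energy scale."""
    x_real = x_real.detach().requires_grad_(True)
    e_real = energy(x_real)
    grad = torch.autograd.grad(e_real.sum(), x_real, create_graph=True)[0]
    penalty = (grad.view(grad.size(0), -1) ** 2).sum(dim=1).mean()
    return e_real.mean() - energy(x_fake.detach()).mean() + lam * penalty
```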
Since we have an energy function, another option is to run an MCMC in data space, and we have tried this with both Metropolis-Hastings (with a Gaussian proposal) and adjusted Langevin (detailed below, which does a gradient step down the energy and adds noise, then rejects high-energy samples). However, we have interestingly obtained the best samples by considering E θ • G as an energy function in latent space and running an adjusted Langevin in that space (compare Fig. 4 with Fig. 7.1). Then, in order to produce a data space sample, we apply G. For performing the MCMC sampling, we use the Metropolis-adjusted Langevin algorithm (MALA), with Langevin dynamics producing a proposal distribution in the latent space as follows: DISPLAYFORM0 Next, the proposedz t+1 is accepted or rejected using the Metropolis Hastings algorithm, by computing the acceptance ratio DISPLAYFORM1 and accepting (setting z t+1 =z t+1) with probability r. The overall training procedure for EnGAN is detailed in Algorithm 1, with MALA referring to the above procedure for sampling by MCMC, with n mcmc steps. When n mcmc =0, we recover the base case where z is only sampled from the prior and passed through G, and no MCMC is done to clean up the sample. Require: Score penalty coefficient λ, number of energy function updates n ϕ per generator updates, number of MCMC steps n mcmc, number of training iterations T, Adam hyperparameters α, β 1 and β 2. Require: Energy function E θ with parameters θ, entropy statistics network T φ with parameters φ, generator network G ω with parameters ω, minibatch size m for t = 1,..., T do for 1,..., n ϕ do Sample minibatch of real data {x,..., Sample minibatch of latent variables {z DISPLAYFORM0 Per-dimension shuffle of the minibatch z of latent variables, obtaining {z DISPLAYFORM1 The gradient-based updates can be performed with any gradient-based learning rule. We used Adam in our experiments. Generative models trained with maximum likelihood often suffer from the problem of spurious modes and excessive entropy of the trained distribution, where the model incorrectly assigns high probability mass to regions not present in the data manifold. Typical energy-based models such as RBMs suffer from this problem partly because of the poor approximation of the negative phase gradient, as discussed above. To check if EnGAN suffers from spurious modes, we train the energy-based model on synthetic 2D datasets (swissroll, 25gaussians and 8gaussians) similar to BID10 and visualize the energy function. From the probaility density plots on Figure 1, we can see that the energy model doesn't suffer from spurious modes and learns a sharp energy distribution. Bottom: Corresponding probabiltiy density visualizations. Density was estimated using a sample based approximation of the partition function. GANs have been notoriously known to have issues with mode collapse, by which certain modes of the data distribution are not at all represented by the generative model. Similar to the mode dropping issue that occurs in GANs, our generator is prone to mode dropping as well, since it is matched with the energy model's distribution using a reverse KL penalty D KL [P G || P E]. Although the entropy maximization term attempts to fix this issue by maximizing the entropy of the generator's distribution, it is important to verify this effect experimentally. For this purpose, we follow the same experimental setup as BID21 and BID25. 
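The latent-space sampler can be sketched as follows (illustrative; the step size, schedule, and acceptance bookkeeping are our own choices rather than the paper's): a Langevin proposal is taken on the latent energy E_θ(G(z)) and accepted or rejected with the standard Metropolis-Hastings correction for the asymmetric Gaussian proposal.

```python
import math
import torch

def mala_latent_sampler(G, energy, z0, n_steps=100, step=1e-2):
    """Metropolis-adjusted Langevin in latent space with U(z) := E_theta(G(z))."""
    def U(z):
        return energy(G(z)).view(-1)                 # per-chain latent-space energy

    def grad_U(z):
        z = z.detach().requires_grad_(True)
        return torch.autograd.grad(U(z).sum(), z)[0]

    def log_q(a, mean):                              # log N(a; mean, 2*step I), up to constants
        return -((a - mean) ** 2).view(a.size(0), -1).sum(1) / (4.0 * step)

    z = z0.detach()
    for _ in range(n_steps):
        mean_fwd = z - step * grad_U(z)
        prop = mean_fwd + math.sqrt(2.0 * step) * torch.randn_like(z)
        mean_bwd = prop - step * grad_U(prop)
        # log acceptance ratio: energy difference plus proposal correction
        log_r = U(z) - U(prop) + log_q(z, mean_bwd) - log_q(prop, mean_fwd)
        accept = (torch.rand_like(log_r).log() < log_r).float()
        accept = accept.view(-1, *([1] * (z.dim() - 1)))
        z = (accept * prop + (1.0 - accept) * z).detach()
    return z, G(z)
```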
We train our generative model on the StackedMNIST dataset, which is a synthetic dataset created by stacking MNIST on different channels. The number of modes can be counted using a pretrained MNIST classifier, and the KL divergence can be calculated empirically between the mode count distribution produced by the generative model and true data (assumed to be uniform). Table 1: Number of captured modes and Kullblack-Leibler divergence between the training and samples distributions for ALI BID7, Unrolled GAN BID21, Vee-GAN BID25, PacGAN BID20, WGAN-GP BID10. Numbers except our model and WGAN-GP are borrowed from BID1 Table 1, we can see that our model naturally covers all the modes in that data, without dropping a single mode. Apart from just representing all the modes of the data distribution, our model also better matches the data distribution as evidenced by the very low KL divergence scores as compared to the baseline WGAN-GP.We noticed empirically that modeling 10 3 modes was quite trivial for benchmark methods such as WGAN-GP BID10. Hence, we also try evaluating our model on a new dataset with 10 4 modes (4 stacks). The 4-StackedMNIST was created to have similar statistics to the original 3-StackedMNIST dataset. We randomly sample and fix 128 × 10 4 images to train the generative model and take 26 × 10 4 samples for evaluations. Generative models trained with maximum likelihood have often been found to produce more blurry samples. Our energy model is trained with maximum likelihood to match the data distribution and the generator is trained to match the energy model's distribution with a reverse KL penalty. To evaluate if our generator exhibits blurriness issues, we train our EnGAN model on the standard benchmark 32x32 CIFAR10 dataset for image modeling. We additionally train our models on the 64x64 cropped CelebA -celebrity faces dataset to report qualitative samples from our model. Similar to recent GAN works BID22, we report both Inception Score (IS) and Frchet Inception Distance (FID) scores on the CIFAR10 dataset and compare it with a competitive WGAN-GP baseline. From TAB1, we can see that in addition to learning an energy function, EnGAN trains generative model producing samples comparable to recent adversarial methods such as WGAN-GP BID10 widely known for producing samples of high perceptual quality. Additionally, we attach samples from the generator trained on the CelebA dataset and the 3-StackedMNIST dataset for qualitative inspection. As shown below in Fig. 4, the visual quality of the samples can be further improved by using the proposed MCMC sampler. Figure 3: Left: 64x64 samples from the CelebA dataset Right: 28x28 samples from the 3-StackedMNIST dataset. All samples are produced by the generator in a single step, without MCMC fine-tuning (see Fig. 4 for that). Apart from the usefulness of energy estimates for relative density estimation (up to the normalization constant), energy functions can also be useful to perform unsupervised anomaly detection. Unsupervised anomaly detection is a fundamental problem in machine learning, with critical applications in many areas, such as cybersecurity, complex system management, medical care, etc. Density estimation is at the core of anomaly detection since anomalies are data points residing in low probability density areas. 
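A minimal sketch of the decision function used in the anomaly-detection experiments below, the squared norm of the energy gradient with respect to the input, ‖∇_x E_θ(x)‖², computed per example:

```python
import torch

def anomaly_score(energy, x):
    """Per-example anomaly score ||grad_x E_theta(x)||_2^2; larger scores indicate
    inputs lying in low-density regions of the learned energy model."""
    x = x.detach().requires_grad_(True)
    grad = torch.autograd.grad(energy(x).sum(), x)[0]
    return (grad.view(grad.size(0), -1) ** 2).sum(dim=1)
```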
We test the efficacy of our energy-based density model for anomaly detection using two popular benchmark datasets: KDDCUP and MNIST.KDDCUP We first test our generative model on the KDDCUP99 10 percent dataset from the UCI repository BID19.Our baseline for this task is Deep Structured Energy-based Model for Anomaly Detection (DSEBM) BID30, which trains deep energy models such as Convolutional and Recurrent EBMs using denoising score matching instead of maximum likelihood, for performing anomaly detection. We also report scores on the state of the art DAGMM BID32, which learns a Gaussian Mixture density model (GMM) over a low dimensional latent space produced by a deep autoencoder. We train our model on the KDD99 data and use the score norm ||∇ x E θ (x)|| 2 2 as the decision function, similar to BID30. BID32. Values for our model are derived from 5 runs. For each individual run, the metrics are averaged over the last 10 epochs. +0.1990 F1 score) and is comparable to the current SOTA model (DAGMM) specifically designed for anomaly detection. MNIST Next we evaluate our generative model on anomaly detection of high dimensional image data. We follow the same experiment setup as BID29 and make each digit class an anomaly and treat the remaining 9 digits as normal examples. We also use the area under the precision-recall curve (AUPRC) as the metric to compare models. Table 4: Performance on the unsupervised anomaly detection task on MNIST measured by area under precision recall curve. Numbers except ours are obtained from BID29. Results for our model are averaged over last 10 epochs to account for the variance in scores. Table 4, it can be seen that our energy model outperforms VAEs for outlier detection and is comparable to the SOTA BiGAN-based anomaly detection methods for this dataset BID29 which train bidirectional GANs to learn both an encoder and decoder (generator) simultaneously. An advantage with our method is that it has theoretical justification for the usage of energy function as a decision function, whereas the BiGAN-σ model lacks justification for using a combination of the reconstruction error in output space as well as the discriminator's cross entropy loss for the decision function. To show that the Metropolis Adjusted Langevin Algorithm (MALA) performed in latent space produced good samples in observed space, we attach samples from the beginning (with z sampled from a Gaussian) and end of the chain for visual inspection. From the attached samples, it can be seen that the MCMC sampler appears to perform a smooth walk on the image manifold, with the initial and final images only differing in a few latent attributes such as hairstyle, color, face orientation, etc. Figure 4: Left: Samples at the beginning of the chain (i.e. simply from the ordinary generator, z ∼ N (0, I)). Right: Generated samples after 100 iterations of MCMC using the MALA sampler. We see how the chain is smoothly walking on the image manifold and changing semantically meaningful and coherent aspects of the images. We proposed EnGAN, an energy-based generative model that produces energy estimates using an energy model and a generator that produces fast approximate samples. This takes advantage of novel methods to maximize the entropy at the output of the generator using a GAN-like technique. We have shown that our energy model learns good energy estimates using visualizations in toy 2D data and through performance in unsupervised anomaly detection. 
We have also shown that our generator produces samples of high perceptual quality by measuring Inception and Frchet scores and shown that EnGAN is robust to the respective weaknesses of GAN models (mode dropping) and maximumlikelihood energy-based models (spurious modes). We found that running an MCMC in latent space rather than in data space (by composing the generator and the data-space energy to obtain a latentspace energy) works substantially better than running the MCMC in data-space. 7.1 MCMC IN DATA SPACE Figure 5: Samples from the beginning, middle and end of the chain performing MCMC sampling in visible space. Initial sample is from the generator (p G) but degrades as we follow MALA directly in data space. Compare with samples obtained by running the chain in latent space and doing the MH rejection according to the data space energy (Fig. 4). It can be seen that MCMC in data space has poor mixing and gets attracted to spurious modes. For all experiments we use Adam as the optimizer with α = 0.0001, β 1 = 0.5, β 2 = 0.9. We used n mcmc = 0 (no MCMC steps during training) for all scores reported in the paper. Toy Data: The generator, energy-model and the statistics network are simple 3-hidden layer MLPs with dimensionality 512. The input to the statistics network is a conatenation of the inputs x and latents z. For these experiments, we use the energy norm co-efficient λ = 0.1StackedMNIST:: In line with previous work, we adopt the same architectural choices for the generator and energy-model / discriminator as VeeGAN BID25. The statistics network is modeled similar to the energy-model, except with the final MLP which now takes as input both the latents z and reduced feature representation of x produced by the CNN. For the CIFAR10 experiments, we adopt the same'Standard CNN' architecture as in SpectralNorm BID22. We adapt the architecture for the Statistics Network similar to the StackedMNIST experiments as mentioned above. For these experiments, we use the energy norm co-efficient λ = 10Anomaly Detection: For the KDD99 dataset, we adopt the same architecture as BID29. We noticed that using n ψ = 1 and λ = 10 5 worked best for these experiments. A large energy norm coefficient was specifically necessary since the energy model overfit to some artifacts in the data and exploded in value. For the MNIST anomaly detection experiments, we use the same architecture as the StackedMNIST experiments.
HJlmhs05tm
We introduced entropy maximization to GANs, leading to a reinterpretation of the critic as an energy function.
Neural Style Transfer has become a popular technique for generating images of distinct artistic styles using convolutional neural networks. This recent success in image style transfer has raised the question of whether similar methods can be leveraged to alter the “style” of musical audio. In this work, we attempt long time-scale high-quality audio transfer and texture synthesis in the time-domain that captures harmonic, rhythmic, and timbral elements related to musical style, using examples that may have different lengths and musical keys. We demonstrate the ability to use randomly initialized convolutional neural networks to transfer these aspects of musical style from one piece onto another using 3 different representations of audio: the log-magnitude of the Short Time Fourier Transform (STFT), the Mel spectrogram, and the Constant-Q Transform spectrogram. We propose using these representations as a way of generating and modifying perceptually significant characteristics of musical audio content. We demonstrate each representation's shortcomings and advantages over others by carefully designing neural network structures that complement the nature of musical audio. Finally, we show that the most compelling “style” transfer examples make use of an ensemble of these representations to help capture the varying desired characteristics of audio signals. The problem we seek to explore in this paper is the transfer of artistic "style" from one musical audio example onto another. The definition and perception of an artistic style in visual art images (e.g., impressionist, pointilist, cubist) shown in Figure 1 is perhaps more straightforward than in the case musical audio. For images, a successful style transfer algorithm is capable of generating a novel image whose content information, or what is in the image, is matched as well as its stylistic information, or the artistic approach. In other words, it explores the question, "What would a rendering of scene A by artist B look like?" Figure 1: Demonstration of image style transfer courtesy of BID7.For our work, we similarly set out to develop an algorithm that explores the question, "What would it sound like if a musical piece by ensemble/artist A was performed by ensemble/artist B?" It should be noted that we do not approach the problem according to strict musicological definitions (e.g., melodic, harmonic, rhythmic, and structural elements), as one might proceed if given the musical notation of a composition. We do not presume access to the notation or any music theoretic analysis of a piece. We are instead interested in transferring the acoustic features related to harmonic, rhythmic, and timbral aspects of one musical piece onto another. Therefore, for the single instance "style" transfer algorithm we propose in this work, it is more accurate to pose the question as "What would a rendering of musical piece A (by artist A) using the harmonic and rhythmic patterns of piece B (by artist B) sound like?" In this paper, we define musical "style" transfer according to this type of audio content transformation, and will henceforth drop the use of quotation marks around "style". In texture generation, we instead ask "What would it sound like for a source musical piece to contain the same musical patterns and higher-order statistics without any of the same local, event-based information?" 
This can be achieved in the image or audio domain by only optimizing those terms of the loss function of a transfer algorithm associated with style, and not using any loss term associated with content. Currently, there are two types of approaches to image style transfer. The first method uses a learned generative model to manipulate the representation of the data such that it maintains its original content rendered into a new style. The second class of methods, which we investigate and apply in this paper, are concerned with synthesizing new data that matches the representations of data in a learned model in some specific way. Measuring the accuracy of such algorithms' abilities to transfer style is difficult, since most data is not able to be entirely disentangled into separate content and style components. This is especially true for musical style. There have been attempts for learning representations of musical style include the use of generative models which use a MIDI representation of audio BID14. The advantages of using this representation are the ability to focus solely on a highly understandable representation of musical information in its harmonic and rhythmic components, but lacks the ability to capture other important sonic information like timbre. Our approach utilizes many interesting findings from recent research in image style transfer. We suggest that it is possible to use the same style transfer algorithm used for images for musical audio, but best performance requires a careful selection of how content and style is represented, given the task. FIG0 shows a spectral visualization of how a style transfer contains both local, event based information from the content piece, while also having the characteristic nature of the style signal, as there is clearly more energy in the higher frequencies. However, it is important to note that despite this visualization in the log-magnitude STFT representation, the audio is ultimately synthesized in the time-domain. Original attempts at single image style transfer demonstrated successful transfer of artistic style between images through a gradient-based optimization of the convolutional map representations in the Visual Geometry Group (VGG) network BID16. This work focused on synthesizing preferred images whose layer representations in the VGG network, a convolutional neural network that rivals human performance on visual object recognition tasks, matching the likeliness of a content image while the statistics of the layer representations of a target style image are also matched. These statistics are typically represented by taking the inner product of the convolutional layer activations in order to compute the gram matrix. The ability to match each of these is expressed through separate content and style loss terms, which can both be used to optimize the pixels of a randomly initialized image. The motivation for using the gram matrix to represent style is that local information about the style image is lost, preserving only statistics about the convolutional representation over the entire image. Other work has helped explain the effectiveness of this approach by showing that single image style transfer can be re-written as a Maximum Mean Discrepancy problem with a 2nd-order polynomial distance kernel BID10. It was also shown that the need for a pre-trained network to perform style transfer was not at all necessary. 
In fact, the ability to recreate the content of the original image was improved when using randomly initialized weights BID6. Additionally, BID19 has shown that very shallow, untrained neural networks with as little as 1 layer are also capable of sufficiently representing artistic style in images. Ulyanov and Lebedev proposed an algorithm for the transfer of audio style using the log-magnitude STFT of audio with a single random convolutional layer BID17. After the novel log-magnitude spectrogram is generated, the Griffin-Lim algorithm BID5 is used to restore phase information before taking the inverse STFT to recover the time-domain signal. In order to generate these , only 1 convolutional layer with random, untrained weights was used with a rectified linear activation function. In order to make this model successful, 4096 different convolutional kernels must be used. Also, while spectrograms are typically thought of as F (Frequency bins) x T (Time bins) images with no color channels, this work represented them as 1xT images with F color channels. Therefore, the only convolution taking place is happening in time, while the patterns and relationships between frequencies are localized within the style loss. More recent advances in generative music models include WaveNet and SampleRNN, which also offer possibilities for style transfer BID20 BID12. These autoregressive models have successfully demonstrated the ability to generate long-term structure in audio signals. SampleRNN uses a multi-hierarchical recurrent neural network trained with Truncated Backpropogation Through Time (TBTT). WaveNet uses causal, dilated convolutions and residual skip-connections, and has the ability to be conditioned on vocal features which gives the ability to generate the same words and speech patterns with different voice characteristics, such as male or female BID21. There is also a recently proposed WaveNet Autoencoder BID2 ), a generative model for raw audio that can be used to manipulate a latent representation of an encoded sound and then regenerate it. In the previous attempt by BID17 at style transfer for audio, a simple convolutional layer with a ReLU non-linearity was used. A single 1-D convolutional layer, whose kernel has a shape that we denote as 1xKxF xN f was used. K represents the length of time bins convolved with the feature, while F and N f represent the number of frequency bins and filters. These weights are initialized using Glorot Initialization BID4. For our notation, we use x to denote the generated log-magnitude STFT, whose feature map, X is the convolved feature map over whichever dimension is being considered for style transfer or texture generation. Following this, we use s and c to denote the log-magnitude STFT representations for the style and content audio, respectively, whose feature maps are represented by S and C, respectively. The L 2 distance between the target audio and the content audio's feature maps summed over all spatial indices i and channels j is used for content loss expressed in Equation 1. DISPLAYFORM0 In representing style, it is ideal that local information about where musical events happen is lost, but information about how they happen in relation to each other is maintained. To represent the style, the inner-product, or gram matrix of the convolutional feature map is used. The inner-product of the vectorized feature maps X and S, denoted by W and G respectively, are calculated using Equations 2 and 3 for a filter i and another filter j is used to represent style. 
DISPLAYFORM1 The style loss, L Style, calculated as the sum of the L 2 distances between G and W over all pairs of filters i and j in N f, is given in Equation 4. DISPLAYFORM2 The total loss is represented by Equation 5, which uses parameters α and β to measure the importance of transferring each type of style. DISPLAYFORM3 All network weights are unaltered during optimization of the total loss L. Only the log-magnitude STFT representation of the target audio is adjusted. We introduce an extended version of the algorithm proposed in BID17 with different versions of two additional log-magnitude spectrogram representations, described below, in order to address the shortcomings of this algorithm as it applies to musical audio. In particular, BID17 characterizes and represents the timbral style of musical audio, which we define in this work as the short-time envelope and harmonic statistics of the audio signal, but fails to capture information which is either rhythmic or harmonic. We propose using the Mel spectrogram to better capture rhythmic information, and using the Constant Q Transform (CQT) spectrogram in a 2-D convolutional neural network to represent harmonic style. While we use the Mel spectrogram to help represent rhythmic information, the true benefit of this representation is that the compressed channel axis allows for a deeper dilated network structure with a much longer receptive field in time. While this simple, single-layer design is effective in representing short-term harmonic and timbral features of audio, its most obvious fault is its inability to effectively represent longer-term features of the audio signals, which we refer to as the rhythmic components. In order to increase the size of the kernel that spans the temporal dimension of a spectrogram, we need to also decrease the size of the filter dimension, i.e., the number of frequency channels. Without this, the size of our computation graph becomes very large, and the time needed for synthesis is greatly increased. We argue that rhythmic information is still preserved when compressing this frequency dimension. We propose using the same algorithm on a Mel-scaled spectrogram of the log-magnitude STFT, which we refer to as the Mel spectrogram version of the signal, x Mel. The Mel scale provides a mapping between the perceived Mel center frequency and the actual measured frequency, f, as shown in Equation 6 below. The mapping can be used to create a filter bank for projecting the magnitude STFT onto a perceptually optimal smaller number of channels. DISPLAYFORM0 Because the Mel spectrogram decreases the spectral resolution of the STFT in a perceptually uniform manner, it has been a popular choice for state-of-the-art neural networks trained on large corpora of musical audio BID0 BID22. However, instead of using 2-D convolutions on the Mel spectrogram as those works do, we propose treating the mel-frequency axis as a channel axis rather than a spatial axis, as is done for the STFT representations. While a large number of filters in a convolutional kernel are still needed to represent style from this representation, the significantly reduced number of frequency bins means the number of parameters in the kernel can be much smaller. In order to get a much larger receptive field of audio, we use a much longer kernel and a multi-tiered dilated non-causal convolutional structure modeled after the WaveNet auto-encoders BID2.
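As a concrete reference for the loss terms defined in Equations 1-5 above, the following NumPy sketch shows one plausible reading of the content loss, gram matrices, and style loss. The exact equations are elided in this extraction (the DISPLAYFORM placeholders), so the normalization constants and feature-map shapes here are assumptions rather than the paper's exact definitions.

```python
import numpy as np

def content_loss(X, C):
    # Eq. 1 (assumed form): squared L2 distance between the generated and
    # content feature maps, summed over all spatial indices and channels.
    return np.sum((X - C) ** 2)

def gram_matrix(F):
    # Eqs. 2-3 (assumed form): inner products between every pair of filters
    # of the vectorized feature map. F has shape (time, n_filters).
    F = F.reshape(-1, F.shape[-1])
    return F.T @ F

def style_loss(X, S):
    # Eq. 4 (assumed form): squared L2 distance between the gram matrices
    # of the generated and style feature maps, over all filter pairs.
    return np.sum((gram_matrix(X) - gram_matrix(S)) ** 2)

def total_loss(X, C, S, alpha=1.0, beta=1.0):
    # Eq. 5: weighted combination of the content and style terms.
    return alpha * content_loss(X, C) + beta * style_loss(X, S)
```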
This architecture makes use of dilated convolutions with residual, skip-connections in its encoding stage to vastly increase the receptive field. In this way, our model actually can be thought of as a generalization of this encoding model structure, but we have inserted the Mel spectrogram transformation in front of the initial layer which normally receives raw audio. While the original convolutional kernel used in BID17's implementation only had a receptive field of about 30 ms, this structure is capable of representing up to 4 seconds of audio with only 2 residual blocks for audio sampled at 22.05 kHz and N DF T = 2048. Additionally, since the dimensionality of the representation in the channel dimension is decreased by at least 2, we find that we can use a much smaller number for N f, which reduces the size of the computation graph needed for the same receptive field and decreases computation time. Following the WaveNet auto-encoder architecture, we only use dilated convolutions if more than one residual block is used, starting with a dilation rate of 2, and doubling again for each additional residual block. We compute the style loss from each layer in this neural network structure, and the content loss only using the last layer. In practice, we use between 16 (128:1 reduction) and 512 Mel filters (4:1 reduction) that are triangularly shaped and span from 0 Hz to 22.05 kHz. While the STFT and Mel spectrogram allow us to represent both short and long term musical statistics well, neither representation is capable of representing musically relevant frequency patterns, such as chords and intervals, along the frequency axis that can be shifted linearly. In order to achieve this, we need to use a 2-D convolution over a spectral representation. We have chosen the CQT spectrogram BID15 since it is capable of representing transposition in musical pitch as a simple shift along the (warped) frequency axis. In this way, an activation for a particular harmony at a note, n, should be close in value to the same harmonic structure being present at any other note within +/-6 semitones. Similarly to the Mel spectrogram, it has often been chosen as a representation for neural networks for this reason. While in some ways the CQT spectrogram is worse than the Mel spectrogram in terms of perceptual reconstruction quality due to the frequency scaling in lower frequencies, it is the most natural representation for 2-D convolutional kernels to use to represent joint time-frequency spatial features. We implement the "pseudo-CQT", which is obtained by projecting the magnitude STFT signal onto a wavelet filter bank with Constant Q frequency scaling, rather than the recursive sub-sampling method described in BID15, which has the benefit of modeling the transformation as a fully differentiable operation on the raw time-domain audio. In order to achieve key invariance for musical content, we use a total of 86 bins with 12 bins per octave spanning from the notes C1 to D8, and use max-pooling along the convolved frequency axis if key-invariance is desired. This allows features from the target audio to match the features of the content and style audio from multiple keys. As the width of the pooling increases, the representation becomes more key-invariant, and the distortion of the content representation increases. 
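To make the receptive-field claim earlier in this section concrete, the short Python sketch below computes the receptive field of a stack of dilated 1-D convolutions in spectrogram frames and seconds. The first-layer and residual kernel sizes follow the numbers reported later in the experiments; the STFT hop length (here N DFT / 4) is an assumption, so the figure in seconds should be read as an estimate.

```python
def receptive_field_frames(kernel_sizes, dilations):
    # Receptive field (in spectrogram frames) of stacked 1-D convolutions
    # with the given kernel sizes and dilation rates.
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += d * (k - 1)
    return rf

sr, n_dft = 22050, 2048
hop = n_dft // 4  # assumed hop length
# First Mel layer (kernel 50) plus two dilated residual blocks (kernel 25,
# dilation rates 2 and 4), matching the configuration described above.
frames = receptive_field_frames([50, 25, 25], [1, 2, 4])
print(frames, "frames, roughly", frames * hop / sr, "seconds")
```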
This filter bank is similarly logarithmic to the Mel filter bank in the higher frequency range (> 1kHz), but it oversamples the lower frequencies in comparison.
3.3 PARALLEL ARCHITECTURE AND TIME DOMAIN SYNTHESIS
We generalize the formulation described in section 3.1 for the content and style loss to yield 6 total possible loss terms (content and style for all 3 networks) to be used during optimization: DISPLAYFORM0 For each content loss, we use the L 2 loss of the final layers' representations. For the style loss terms, we use the sum of the L 2 losses of the gram matrix representations of each layer in the network, as described in Section 3.1. For the CQT network representations, the inner-product of both the temporal and frequency axes is used since both frequency and time invariance is desired. In practice, we make use of all possible terms, using up to 5 at once. Since there are up to 5 different objectives in our proposed solution, it can be difficult to discover the optimal weighting for a style transfer example. In order to alleviate this process, we propose initializing each loss term's coefficient such that the magnitude of the gradient is 1 for all loss terms, as shown in Equation 7, inspired by BID10. We then control the scaling of the then-normalized loss terms using the parameter Γ Type,Net, as shown in Equation 8. DISPLAYFORM1 The objective function to be minimized is the sum of all of these losses scaled by β Type,Net, as expressed in Equation 9. DISPLAYFORM2 The previous method computes the log-magnitude STFT as a pre-processing step and, after optimizing the result, used the Griffin-Lim algorithm BID5 to reconstruct phase. We, instead, propose a method for estimating phase simultaneously with the optimization of the STFT representation. This transformation is fully differentiable with respect to the raw time-domain audio signal, meaning that if overlapping time windows are used, phase information can be approximated. We use the gradient-based optimization algorithm Limited-Memory BFGS (L-BFGS-B) for its superior performance in non-linear optimization problems, including image style transfer. Recent work has shown that BFGS is not only capable of reconstructing phase in time-domain signals from magnitude STFT representations, but it converges faster and can reconstruct phase on some signals on which Griffin-Lim fails BID1. Because of this, we choose to model the transformations from the time-domain target input to the log-magnitude STFT, Mel, and CQT representations (each a projection of the target audio onto a different basis) as symbolic combinations of differentiable convolutional and dense layers in our computation graph. This allows us to optimize the time-domain audio from each spectral representation network's gradient in one stage without suffering major phase distortion. Another advantage of optimizing the target audio directly in the time domain is that it also allows neural representations of the time-domain audio to be optimized simultaneously. We see this as being outside of the scope of the work being presented here, however. While each of these representations has complementary advantages and disadvantages, we can still use all 3 representations in parallel neural networks to capitalize upon the advantages from each. This is entirely possible through the ensemble representation of each network shown in FIG1 below.
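Since Equations 7-9 are elided here, the following PyTorch sketch shows one plausible implementation of the gradient-magnitude loss balancing described above. The assumption that each coefficient is set to the reciprocal of its term's initial gradient norm with respect to the target audio, as well as the function and variable names, are ours rather than the paper's.

```python
import torch

def normalize_loss_weights(loss_fns, x0):
    # Assumed reading of Eq. 7: pick each coefficient so that the scaled
    # term's gradient w.r.t. the target audio has magnitude 1 at init.
    weights = {}
    for name, fn in loss_fns.items():
        x = x0.clone().detach().requires_grad_(True)
        grad, = torch.autograd.grad(fn(x), x)
        weights[name] = 1.0 / (grad.norm() + 1e-8)
    return weights

def total_objective(x, loss_fns, weights, gammas):
    # Eqs. 8-9: sum of the normalized terms, each scaled by a user-set Gamma.
    return sum(gammas[n] * weights[n] * fn(x) for n, fn in loss_fns.items())
```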
Prior work in style transfer proposed that higher-layer neural representations of images captured the content more than style, where content is any information relevant to the classification task. We seek to create a similarly meaningful content representation, which maintains general information about musical events, without specifics like key and timbre. Key-invariance is desired since modern popular music is often composed without an emphasis on the specific key, and instead the harmony is commonly defined by its relation to the "one", which in Western music can be any of 12 musical keys. For these reasons, we argue that key-invariance is a crucial feature for a representation of musical content. We propose and test 3 methods for examining invariance to musical key between the content and style audio sources. For the first, we suggest using a Mel spectrogram content representation with only a few channels. This has the effect of taking a wide-band average of the audio signal, capturing the envelope of the audio signal. This type of representation captures only the rhythmic hits and the overall phonemes of words being sung. While prominent rhythmic and timbral shape information is preserved, all information about key is lost since the signal is being averaged over large regions of frequency. For the second key-invariant content representation, we wish to use the convolved 2-D feature map of the CQT spectrogram. We choose a convolutional kernel that is 11x11, meaning it spans just under one octave at each stride. We use a max-pooling layer in order to make this content representation key-invariant, and we use max-pooling along the convolved frequency axis of the convolutional feature map (see the sketch below). This representation ensures that content that has been shifted in key, or shifted along the warped frequency axis of the CQT spectrogram, will yield a similar representation to the non-shifted version of the same signal. However, it is important to note that this max-pooling layer is only beneficial for making L Content,CQT key-invariant, since L Style,CQT uses the inner product of the spatial dimension of the data, making it already key-invariant. In comparison with the key-invariant Mel spectrogram representation, this representation better preserves melodic and harmonic information without maintaining local key information. In practice we use a stride of 2 for the pooling; however, a greater stride could be used for increased invariance at the cost of greater information loss. It is important to note that pooling occurs on the convolutional layer versions of the representation, not the raw CQT itself. Pooling directly on the raw CQT would destroy harmonic information, while pooling on extracted patterns spanning enough frequency allows for these same patterns (i.e. a major third interval) to be placed at any fundamental frequency within the width of semi-tones defined by the pooling size. Finally, for the best approximation, we recommend combining both of these loss terms to encapsulate the information from each key-invariant representation. Previous work in image style transfer has shown that using instance normalization layers can improve the quality of the synthesized images. Unfortunately, when we attempted to implement these for audio, these layers also seemed to introduce a noticeable amount of noise to the target audio.
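The following PyTorch sketch illustrates the key-invariant CQT content representation referenced above: a 2-D convolution over the CQT followed by max-pooling along the convolved frequency axis. The kernel size, pooling stride, and SELU activation follow the numbers reported elsewhere in the paper, but the module as written is our assumption, not the authors' released code.

```python
import torch
import torch.nn as nn

class KeyInvariantCQTContent(nn.Module):
    """2-D conv over the CQT (frequency x time), then max-pool along the
    convolved frequency axis so that content shifted by a few semitones
    maps to a similar feature map."""
    def __init__(self, n_filters=256, pool=2):
        super().__init__()
        # An 11x11 kernel spans just under one octave at 12 bins per octave.
        self.conv = nn.Conv2d(1, n_filters, kernel_size=(11, 11), padding=5)
        self.pool = nn.MaxPool2d(kernel_size=(pool, 1), stride=(pool, 1))

    def forward(self, cqt):
        # cqt: (batch, 1, freq_bins, time_frames)
        return self.pool(torch.selu(self.conv(cqt)))
```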
Instead, we use newly proposed self-normalizing non-linear activation functions, SELUs BID8, in place of ReLUs BID13, since they have both the desirable non-linear properties of activation functions as well as the property of allowing network activations to have 0 mean and unit variance with LeCun Normal initialization BID9. We find that using this activation function increases the quality of the synthesized audio, reduces convergence time, and makes finding an optimal weighting for the style and content terms more consistent from example to example. Since we don't use any type of dense neural network layers in our proposed networks, we demonstrate that it is possible to use different lengths of content and style audio with the same convolutional kernels and still obtain a valid style loss. To achieve this, we simply divide by the size of the convolutional feature map being used prior to computing the difference in the style loss calculation. We propose 3 experiments for showing the ability of our algorithm to improve the transfer of musical style. First, we show how using the Mel spectrogram representation of style is able to better capture the musical statistics of the audio through texture generation. We do so by examining what happens when we generate musical textures, meaning only style loss with no content. We also examine how the combined representation of the Mel and CQT representations of content offers a frequency-invariant representation. Finally, we examine the quality of our algorithm for a variety of musical style transfer examples using both quantitative and qualitative assessment. For hyper-parameters, we find using 4096 filters for the STFT network, 1025 for the Mel network, and 256 for the CQT network in each convolutional layer to be sufficient. We use kernel sizes of 1x11 (~25 ms), 1x50 (~1 second at first layer), and 11x11 (11 semitones x ~25 ms) for the STFT, Mel, and CQT networks respectively as well. However, for residual layers in the Mel network, we use a 1x25 kernel. Since we use SELUs, we also use LeCun Normal weight initialization BID9. All experiments were conducted using a Tensorflow implementation of the ensemble model described in the previous section. We have released code used for the experiments. 1 Additionally, we also have a collection of musical style transfer examples, including all of the examples used for the results shown below. While textures synthesized using prior methods exhibit a lack of rhythmic structure, we show that the use of residual dilated non-causal convolutions on the Mel spectrogram works well for achieving a large receptive field and better capturing rhythmic statistics. In order to measure the rhythmic style similarity of a texture to its source audio, we compute the Kullback-Leibler (KL) divergence of the Inter-Onset Interval Length Distributions. We detect onsets using Librosa's onset detector, which picks peaks based on the differences of energy in adjacent time frames of a spectrogram BID11. While increasing the receptive field to a much longer length improves this measure of rhythmic similarity, it is important to show that this is not introducing the effect of copying sections of the original source audio. In order to verify that this is not occurring, we also show that the maximum cross-correlation value between the time-domain audio waveforms is not significantly affected by the length of this field. We denote max(Φ XY) as the maximum cross-correlation value between two signals, X and Y.
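A minimal Python sketch of the rhythmic-similarity measurement described above, using librosa's onset detector to build inter-onset interval distributions and comparing them with KL divergence. The histogram bin edges and smoothing constant are our assumptions; the paper does not specify them in this section.

```python
import numpy as np
import librosa

def ioi_distribution(y, sr=22050, bins=np.arange(0.0, 2.0, 0.05)):
    # Detect onsets in seconds, then histogram the inter-onset intervals.
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units='time')
    iois = np.diff(onsets)
    hist, _ = np.histogram(iois, bins=bins)
    return hist.astype(float) + 1e-8   # smooth to avoid division by zero

def ioi_kl_divergence(source, texture, sr=22050):
    # KL divergence between the source and texture IOI distributions.
    p = ioi_distribution(source, sr)
    q = ioi_distribution(texture, sr)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))
```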
This helps justify that the source and texture are still significantly different, so that the texture isn't simply a distorted duplicate of the source. Additionally, we show that the mean local autocorrelation has more consistent hierarchical rhythmic structure at different lag times. Figure 4 summarizes these results.
Figure 4: Columns 1 and 3: comparison of inter-onset lengths distribution and KL divergence from the source distribution for a texture generation example as the effective receptive field increases in time. Columns 2 and 4: mean local autocorrelation plots showing the increase in hierarchical rhythmic structure of the audio without any significant increase in the maximum cross-correlation value, max(Φ XY).
In order to test the quality of key-invariance, we suggest that it must be possible to use songs in two different keys for style and content, and be able to reconstruct the same content in the key of the style audio. We synthesized 13 different transpositions from a MIDI score of 4-second clips from "The Star Spangled Banner" and performed style transfer experiments where the original key version is used as content, and the versions which are transposed +/-6 semitones are used for style. We used the mean squared error between the log-magnitude STFT representations of the result and the style used as a metric for content key-invariance. We choose this metric because the log-magnitude STFT contains most of the information about the original time-domain audio signal, and we argue that being able to reconstruct the exact style signal should be possible if the content and style are the same, but in different keys. FIG3 shows the full results for all experiments. Our results confirm that changing key between style and content has less of an effect on our proposed key-invariant content representations. The results confirm that for all cases where the content and style pieces are in different keys, all of the proposed key-invariant representations have lower error than the normal STFT representations. While there is no clear best representation of these three proposed versions according to this quantitative metric, we observe that using both the CQT and Mel representations captures different kinds of key-invariant information, and combining these representations yields the best sounding results. Samples from this experiment are included in the supplement. We tested the full style transfer algorithm with different combinations of loss terms and hyperparameters for a diverse set of examples. We found that there is not one set of loss terms that works best for all content-style transfer pairs, as the best set of loss terms is usually dependent on the nature of the specific style transfer task. However, we are able to demonstrate specific cases where using only the STFT loss terms fails, and significant improvement is achieved with the introduction of the other objectives based on perceptual audio representations. All examples use content and style with tempos within 10 beats per minute (bpm) of each other. We summarize our findings for a variety of content and style loss formulations in Table 1.
Table 1: Summary of the effects of using different possible loss term formulations.
We find that using both of our proposed content and style loss formulations improves the example quality over the simple STFT architecture proposed in BID17. We notice that using Mel spectrogram representations for achieving a larger receptive field in time greatly helps transfer style in certain cases where the long-term nature of the style audio is complex, and important for recognizing the nature of the musical style. Using both Mel and CQT representations for content greatly helps for cases where the nature of the content and style audio are very different from each other, even in cases where the content and style are in the same key. The abstracted content representation is better able to re-implement content with the musical attributes of the style audio without simply laying 2 different sounding examples over each other. This increase in performance is best displayed in examples of singing style transfer. Using a higher number of Mel channels and residual blocks will give behavior similar to a "mash-up" of 2 audio examples, where portions of the style audio are placed over the content audio. Simply reducing the number of Mel channels eliminates this behavior, so that L Style,STFT has high-resolution frequency information over a short time span, representing the timbral and short-time harmonic features only, and L Style,Mel uses a much longer time span with low frequency resolution, representing long-term musical attributes like rhythmic structure. When we use the Mel spectrogram content representation, we notice that the wide-frequency-band nature of the synthesized audio sometimes has an undesirable, "breathy" quality to it. This problem is most evident when there are vocals. In order to restrain this effect, we add an L 1 penalty to the log-magnitude STFT to encourage the synthesized audio to have a sparse spectrum. We find that this helps eliminate the evenly distributed nature of the content part of the synthesized audio. We introduce several improvements for performing musical style transfer on raw audio through the utilization of multiple audio representations. Our contributions can be summarized as follows: First, we have demonstrated that using additional representations of Mel and CQT spectrograms with accompanying neural structure improves, in many cases, the capture of musically meaningful style information. Secondly, we have proposed a novel, key-invariant content representation for musical audio. Finally, we have shown that despite using log-magnitude spectrograms to capture the content and style information, we are still able to synthesize a target audio waveform in the time domain using backpropagation through the STFT. While our proposed content representations work for audio in different keys, there still is no representation for tempo invariance. Other future work may include using learned generative models to perform musical style transfer and trying to perform style transfer entirely in the time domain. This, or the use of complex weights, may be able to help improve the representation of phase information in neural representations.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 ]
BybQ7zWCb
We present a long time-scale musical audio style transfer algorithm which synthesizes audio in the time-domain, but uses Time-Frequency representations of audio.
To communicate with new partners in new contexts, humans rapidly form new linguistic conventions. Recent language models trained with deep neural networks are able to comprehend and produce the existing conventions present in their training data, but are not able to flexibly and interactively adapt those conventions on the fly as humans do. We introduce a repeated reference task as a benchmark for models of adaptation in communication and propose a regularized continual learning framework that allows an artificial agent initialized with a generic language model to more accurately and efficiently understand their partner over time. We evaluate this framework through simulations on COCO and in real-time reference game experiments with human partners. Linguistic communication depends critically on shared knowledge about the meanings of words BID9. However, the real-world demands of communication often require speakers and listeners to go beyond dictionary meanings to understand one another BID0 BID15. The social world continually presents new communicative challenges, and agents must continually coordinate on new meanings to meet them. For example, consider a nurse visiting a bed-ridden patient in a cluttered home. The first time they ask the nurse to retrieve a particular medication, the patient must painstakingly refer to unfamiliar pills, e.g. "the vasoprex-tecnoblek meds for my blood pressure, in a small bluish bottle, on the bookcase in my bathroom." After a week of care, however, they may just ask for their "Vasotec."This type of flexible language use poses a challenge for models of language in machine learning. Approaches based on deep neural networks typically learn a monolithic meaning function during training, with fixed weights during use. For an in-home robot to communicate as flexibly and efficiently with patients as a human nurse, it must be equipped with a continual learning mechanism. Such a mechanism would present two specific advantages for interaction and communication applications. First, to the extent that current models have difficulty communicating in a new setting, an adaptive approach can quickly improve performance on the relevant subset of language. Second, for human-robot contexts, an adaptive model enables speakers to communicate more efficiently as they build up common ground, remaining understandable while expending significantly fewer words as humans naturally do BID1.In this paper, we introduce a benchmark communication task and general continual learning framework for transforming neural language models into adaptive models that can be deployed in real-time interactions with other agents. Our key insight is that through continual interactions with the same partner in a shared context, an adaptive listener can more effectively communicate with its partner FIG0.We are motivated by hierarchical Bayesian approaches to task-specific adaptation. Our approach integrates two core components: (i) a loss function combining speaker and listener information, and (ii) a regularization scheme for fine-tuning model weights without overfitting. We begin by recasting communication as a multi-task problem for meta-learning. Each context and communicative partner can be regarded as a related but distinct task making its own demands on the agent's language model. 
To be effective across many such tasks, a communicative agent must both have a good prior representation they can use to understand novel partners and contexts, and have a mechanism to rapidly update this representation from a small number of interactions. As a benchmark for studying this problem, we introduce the repeated reference game task FIG0, which has been widely used in cognitive science to study partner-specific adaptation in communication BID8 BID1 BID18. In this task, a speaker agent and a listener agent are shown a context of images, C, and must collaborate on how to refer to them. On each trial, one of these images is privately designated as the target object o for the speaker. The speaker thus takes the pair (o, C) as input and returns an utterance u that will allow the listener to select the target. The listener agent then takes (u, C) as input and returns a softmax probability for each image, which it uses to make a selection. Both agents then receive feedback about the listener's response and the identity of the target. Critically, the sequence of trials is constructed so that each image repeatedly appears as the target, allowing us to evaluate how communication about each image changes over time. Before formalizing our algorithm as a generic update rule for neural networks, we describe the theoretical Bayesian foundations of our approach. At the core of any communication model is a notion of the semantics of language, which supplies the relationship between utterances and states of the world. Under a Bayesian approach, this representation is probabilistic: we represent some uncertainty over meanings. In a hierarchical Bayesian model, this uncertainty is structured over different partners and contexts. At the highest level of the hierarchy is a task-general variable Θ which parameterizes the agent's task-specific prior expectations P (θ i |Θ), where θ i represents the semantics used by a novel partner i. Given observations D i from communicative interactions in that context, an agent can update their task-specific model using Bayes rule: DISPLAYFORM0 The Bayesian formulation thus decomposes the problem of task-specific adaptation into two terms, a prior term P (θ i |Θ) and a likelihood term P (D i |θ i). The prior captures the idea that different language tasks share some task-general structure in common: in the absence of strong information about usage departing from this common structure, the agent ought to be regularized toward their task-general knowledge. The likelihood term accounts for needed deviations from general knowledge due to evidence from the current situation. The form of the likelihood depends on the task at hand. For our benchmark communication task, D i = {(u, o) t } contains paired observations of utterances u and their objects of reference o at times t. These data can be viewed from the point of view of a speaker (generating u given o) or a listener (choosing o from a context of options, given u) BID14 BID6. A speaker model 1 uses its task-specific semantics θ i to sample utterances u proportional to how well they apply to o: DISPLAYFORM1 A listener can be modeled as inverting this speaker model to evaluate how well an utterance u describes each object o relative to the others in a context C of objects by normalizing BID3 BID16; BID11: DISPLAYFORM2 Because these views of the data D i provide complementary statistical information about the task-specific semantics θ i, we will combine them in our loss. 
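To make Equation 3 concrete, the following PyTorch sketch derives a listener distribution over a context by evaluating the speaker likelihood of the utterance for each candidate object and normalizing. The speaker_log_prob function stands in for whatever model scores log P(u | o) (for the neural model described later, a sum of per-word log-softmax scores); the name and interface are ours, not the authors'.

```python
import torch
import torch.nn.functional as F

def listener_distribution(speaker_log_prob, utterance, context):
    # L(o | u, C): score the utterance under the speaker model for every
    # object in the context, then normalize across the context (Eq. 3).
    scores = torch.stack([speaker_log_prob(utterance, o) for o in context])
    return F.softmax(scores, dim=0)
```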
There is a deep theoretical connection between the hierarchical Bayesian framework presented in the previous section and recent deep learning approaches to multi-task learning BID12 BID5 BID7. Given a task-general initialization, regularized gradient descent on a particular task is equivalent to conditioning on new data under a Bayesian prior. We exploit this connection to propose an online continual learning scheme for a neural listener model that can adapt to a human speaker in our challenging referential communication task. Concretely, we consider an image-captioning network that combines a convolutional visual encoder (ResNet-152) with an LSTM decoder BID17. The LSTM takes a 300-dimensional embedding as input for each word in an utterance and its output is then linearly projected back to a softmax distribution over the vocabulary size. To pass the visual feature vector computed by the encoder into the decoder, we replaced the final layer of ResNet with a fully-connected adapter layer. This layer was jointly pre-trained with the decoder on the COCO training set and then frozen, leaving only the decoder weights (i.e. word embeddings, LSTM, and linear output layer) to be learned in an online fashion.
Algorithm 1 Update step for adaptive language model
Input: θ t: weights at time t
Output: θ t+1: updated weights
Data: (u t, o t): observed utterance and object at time t
for step do
    sample augmented batch of sub-utterances u ∼ P(u)
    update θ t ← θ t + β∇[P (u|o) + P (o|u) + reg(o, u)]
end for
Upon observing each utterance-object data point in the current task, we take a small number of gradient steps fine-tuning these weights to better account for the speaker's usage (see Algorithm 1). We consider several loss terms and techniques to do so. Speaker and listener likelihood. The primary signal available for adaptation is the (log-) probability of the new data under speaker and listener likelihoods given in Eqns. 2-3. Our speaker likelihood serves to make the observed utterance more likely for the target in isolation, while our listener likelihood makes it more likely relative to other objects in context. The speaker and listener likelihoods can be computed directly from the neural captioning model, as shown in FIG0, where the probability of each word is given by the softmax decoder output conditioned on the sentence so far. Regularization. We introduce two kinds of regularization terms to approximate the Bayesian prior on task-specific learning. First, rather than directly regularizing weights, a global KL regularization term minimizes the divergence between the captioning model's output probabilities before and after fine-tuning BID19 BID4. Since the support for our distribution of captions is infinite, we approximate the divergence incrementally by expanding from the maximum a posteriori (MAP) word at each step according to P, where P represents the model at initialization and Q t represents the model at time t. This loss is then averaged across random images from the full domain O, not just those in context: DISPLAYFORM0 where we denote the word at position i by w i and terminate after reaching L, the length of the MAP caption. A second form of regularization we consider is local rehearsal: we sum the likelihood over previous observations (u, o) τ from the same partner to prevent overfitting to the most recent observation. Finally, we examine listener variants of both forms of regularization by using the listener likelihood instead of the speaker likelihood.
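For illustration, here is a hedged Python sketch of the Algorithm 1 update step, combining the speaker likelihood, listener likelihood, and regularization terms described above with the sub-utterance augmentation defined later in the paper. The model interface (speaker_log_prob, listener_log_prob, regularizers) is hypothetical shorthand for the captioning network's methods, not the authors' actual API.

```python
import random
from itertools import combinations

def sub_utterances(words):
    # Ordered "powerset" augmentation D(u): every non-empty subset of word
    # positions, kept in their original order.
    subs = []
    for r in range(1, len(words) + 1):
        for idx in combinations(range(len(words)), r):
            subs.append(" ".join(words[i] for i in idx))
    return subs

def adapt_step(model, optimizer, utterance, target, context,
               n_steps=8, batch_size=8):
    # One round of fine-tuning after observing (u_t, o_t).
    pool = sub_utterances(utterance.split())
    for _ in range(n_steps):
        batch = random.sample(pool, min(batch_size, len(pool)))
        loss = 0.0
        for u in batch:
            loss = loss - model.speaker_log_prob(u, target)            # P(u|o)
            loss = loss - model.listener_log_prob(target, u, context)  # P(o|u)
        loss = loss + model.regularizers(utterance, target)  # KL + rehearsal
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```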
For example, we compute the listener KL regularization by comparing the initial listener distribution over the objects in context o ∈ C with the fine-tuned model's distribution: D KL (P (o|u)||Q t (o|u)). We anneal the weight on the listener regularization terms over time while reverse-annealing the listener likelihood. Data augmentation. A final component of our approach is a data augmentation step on the new utterance u. Ideally, an adaptive agent should learn that words and sub-phrases contained in the observed utterance are compositionally responsible for its meaning. We thus derive a small training dataset D(u) from u; for simplicity, we take the (ordered) powerset D(u) = P(u) of all sub-utterances. To evaluate our model, we implemented a repeated reference game using images from the validation set of COCO BID10 as the targets of reference. To construct challenging contexts C, we used our pre-trained visual encoder to find sets of highly similar images. We extracted feature vectors for each image, partitioned the images into 100 groups using a k-means algorithm, sampled one image from each cluster, and took its 3 nearest neighbors in feature space, yielding 100 unique contexts of 4 images each 3. We first investigated the baseline performance of human speakers and listeners. We recruited 113 participants from Amazon Mechanical Turk and automatically paired them into an interactive environment with a chatbox. For each of these 56 pairs, we sampled a context and constructed a sequence of 24 trials structured into 6 repetition blocks, where each of the 4 images appeared as the target once per block. We prevented the same target appearing twice in a row and scrambled the order of the images on each player's screen on each trial. We found that pairs of humans were remarkably accurate at this task, with performance near ceiling on every round. At the same time, they grew increasingly efficient in their communication: the utterance length decreased from an average of 7 words per image on the first repetition to only 3 words on the last. A mixed-effects regression with random slopes and intercepts accounting for variability at the pairand context-level found a significant decrease in utterance length across repetitions, t = −5.8, p < 0.001 (FIG2 . Next, we evaluated how our adaptive listener performed in real-time interaction with human speakers. We recruited 45 additional participants from Amazon Mechanical Turk who were told they would be paired with an artificial agent learning how they talk. This task was identical to the one performed by humans, except participants were only allowed to enter a single message through the chatbox on each trial. This message was sent to a GPU where the model weights from the previous trial were loaded, used to generate a response, and updated in real-time for the next round. The approximate latency for the model to respond was 6-8s. We used a batch size of 8, learning rate of 0.0005, and took 8 gradient steps after each trial. For our loss objective, we used a linear combination of all speaker and listener likelihood losses and regularization terms. We found that a listener based on a pre-trained neural captioning modelthe initialization for our adapting model-performs much less accurately than humans due to the challenging nature of the reference task. Yet our model rapidly improves in accuracy as it coordinates on appropriate meanings with human speakers. 
Similarly, while speakers did not simplify their utterances to the same extent as they did with other humans, perhaps due to early feedback about errors, they nonetheless became significantly more efficient over time, b = −19, t = −5 (see FIG2). We proceed to a series of lesion analyses that analyze the role played by each component of our approach. Fine-tuning repeatedly on a small number of data points presents a clear risk of catastrophic forgetting BID13, losing our ability to produce or understand utterances for other images. Our KL regularization term (Eqn. 4) was intended to play the same role as a Bayesian prior, preventing catastrophic forgetting by tethering task-specific behavior to the task-general model. To test the effectiveness of this term, we examined the likelihood of different captions before and after adaptation to the human baseline utterances. First, we sampled a random set of images from COCO that were not used in our experiment as control images, and used the initialized state of the LSTM to greedily generate a caption for each. We also generated initial captions for the target objects in context. We recorded the likelihood of all of these sampled captions under the model at the beginning and at each step of adaptation until the final round. Finally, we greedily generated an utterance for each target at the end and retrospectively evaluated its likelihood at earlier states. These likelihood curves are shown with and without speaker KL regularization in FIG2. The final caption becomes more likely in both cases; without the KL term, the initial captions for both targets and unrelated controls are (catastrophically) lost. We next simulated our adaptive agent's performance understanding utterances from the human baseline under lesioned losses FIG2. We found that rehearsal on previous rounds had the largest qualitative benefit, allowing for faster adaptation on early rounds, while data augmentation and the listener terms provided small boosts later in the game. Compared to a non-adapting baseline, however, even a simple loss only containing the speaker likelihood and speaker KL regularization performed better over time-successfully adapting to human language use. Human language use is flexible, continuously adapting to the needs of the current situation. In this paper, we introduced a challenging repeated reference game benchmark for artificial agents, which requires such adaptability to succeed. We proposed a continual learning approach that forms context-specific conventions by adapting general-purpose semantic knowledge. Even when models based on generalpurpose knowledge perform poorly, our approach allows human speakers working with adapted variants of such models to become more accurate and more efficient over time.
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BklzE9Bo3V
We propose a repeated reference benchmark task and a regularized continual learning approach for adaptive communication with humans in unfamiliar domains
Traditional set prediction models can struggle with simple datasets due to an issue we call the responsibility problem. We introduce a pooling method for sets of feature vectors based on sorting features across elements of the set. This can be used to construct a permutation-equivariant auto-encoder that avoids this responsibility problem. On a toy dataset of polygons and a set version of MNIST, we show that such an auto-encoder produces considerably better reconstructions and representations. Replacing the pooling function in existing set encoders with FSPool improves accuracy and convergence speed on a variety of datasets. Consider the following task: you have a dataset wherein each datapoint is a set of 2-d points that form the vertices of a regular polygon, and the goal is to learn an auto-encoder on this dataset. The only variable is the rotation of this polygon around the origin, with the number of points, size, and centre of it fixed. Because the inputs and outputs are sets, this problem has some unique challenges. Encoder: This turns the set of points into a latent space. The order of the elements in the set is irrelevant, so the feature vector the encoder produces should be invariant to permutations of the elements in the set. While there has been recent progress on learning such functions , they compress a set of any size down to a single feature vector in one step. This can be a significant bottleneck in what these functions can represent efficiently, particularly when relations between elements of the set need to be modeled (; b). Decoder: This turns the latent space back into a set. The elements in the target set have an arbitrary order, so a standard reconstruction loss cannot be used naïvely -the decoder would have to somehow output the elements in the same arbitrary order. Methods like those in therefore use an assignment mechanism to match up elements (section 2), after which a usual reconstruction loss can be computed. Surprisingly, their model is still unable to solve the polygon reconstruction task with close-to-zero reconstruction error, despite the apparent simplicity of the dataset. In this paper, we introduce a set pooling method for neural networks that addresses both the encoding bottleneck issue and the decoding failure issue. We make the following contributions: 1. We identify the responsibility problem (section 3). This is a fundamental issue with existing set prediction models that has not been considered in the literature before, explaining why these models struggle to model even the simple polygon dataset. 2. We introduce FSPOOL: a differentiable, sorting-based pooling method for variable-size sets (section 4). By using our pooling in the encoder of a set auto-encoder and inverting the sorting in the decoder, we can train it with the usual MSE loss for reconstruction without the need for an assignment-based loss. This avoids the responsibility problem. 3. We show that our auto-encoder can learn polygon reconstructions with close-to-zero error, which is not possible with existing set auto-encoders (subsection 6.1). This benefit transfers over to a set version of MNIST, where the quality of reconstruction and learned representation is improved (subsection 6.2). In further classification experiments on CLEVR (subsection 6.3) and several graph classification datasets (subsection 6.4), using FSPool in a set encoder improves over many non-trivial baselines. 
Lastly, we show that combining FSPool with Relation Networks significantly improves over standard Relation Networks in a model that heavily relies on the quality of the representation (subsection 6.5). The problem with predicting sets is that the output order of the elements is arbitrary, so computing an elementwise mean squared error does not make sense; there is no guarantee that the elements in the target set happen to be in the same order as they were generated. The existing solution around this problem is an assignment-based loss, which assigns each predicted element to its "closest" neighbour in the target set first, after which a traditional pairwise loss can be computed. We have a predicted set Ŷ with feature vectors as elements and a ground-truth set Y, and we want to measure how different the two sets are. These sets can be represented as matrices with the feature vectors placed in the columns in some arbitrary order, so Ŷ = [ŷ (1),...,ŷ (n)] and Y = [y (1),..., y (n)] with n as the set size (columns) and d as the number of features per element (rows). In this work, we assume that these two sets have the same size. The usual way to produce Ŷ is with a multi-layer perceptron (MLP) that has d × n outputs. Linear assignment. One way to do this assignment is to find a linear assignment that minimises the total loss over Π, the space of all n-length permutations; this can be solved with the Hungarian algorithm in O(n^3) time. Chamfer loss. Alternatively, we can assign each element directly to the closest element in the target set. To ensure that all points in the target set are covered, a term is added to the loss wherein each element in the target set is also assigned to the closest element in the predicted set. This has O(n^2) time complexity and can be run efficiently on GPUs. Both of these losses are examples of permutation-invariant functions: the loss is the same regardless of how the columns of Y and Ŷ are permuted. It turns out that standard neural networks struggle with modeling symmetries that arise because there are n! different list representations of the same set, which we highlight here with an example. Suppose we want to train an auto-encoder on our polygon dataset and have a square (so a set of 4 points with the x-y coordinates as features) with some arbitrary initial rotation (see Figure 1).
Figure 1: Discontinuity (red arrow) when rotating the set of points. The coloured points denote which output of the network is responsible for which point. In the top path, the set rotated by 90° is the same set (exactly the same shape before and after rotation) and encodes to the same feature vector, so the output responsibility (colouring) must be the same too. In this example, after 30° and a further small clockwise rotation, the point that each output pair is responsible for has to suddenly change.
Each pair in the 8 outputs of the MLP decoder is responsible for producing one of the points in this square. We mark each such pair with a different colour in the figure. If we rotate the square (top left in figure) by 90 degrees (top right in figure), we simply permute the elements within the set. They are the same set, so they also encode to the same latent representation and decode to the same list representation. This means that each output is still responsible for producing the point at the same position after the rotation, i.e. the dark red output is still responsible for the top left point, the light red output is responsible for the top right point, etc.
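A compact NumPy/SciPy sketch of the two assignment-based losses described above; element features are placed in columns as in the text. This is our illustration of the standard formulation rather than the authors' code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def pairwise_sq_dists(Y_hat, Y):
    # Y_hat, Y: (d, n) matrices, one set element per column.
    diff = Y_hat[:, :, None] - Y[:, None, :]
    return np.sum(diff ** 2, axis=0)          # (n, n) cost matrix

def linear_assignment_loss(Y_hat, Y):
    # Minimum-cost matching between predicted and target elements,
    # found with the Hungarian algorithm (O(n^3)).
    cost = pairwise_sq_dists(Y_hat, Y)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum()

def chamfer_loss(Y_hat, Y):
    # Each prediction matched to its nearest target and vice versa (O(n^2)).
    cost = pairwise_sq_dists(Y_hat, Y)
    return cost.min(axis=1).sum() + cost.min(axis=0).sum()
```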
However, this also means that at some point during that 90 degree rotation (bottom path in figure), there must exist a discontinuous jump (red arrow in figure) in how the outputs are assigned. We know that the 90 degree rotation must start and end with the top left point being produced by the dark red output. Thus, we know that there is a rotation where all the outputs must simultaneously change which point they are responsible for, so that completing the rotation results in the top left point being produced by the dark red output. Even though we change the set continuously, the list representation (MLP or RNN outputs) must change discontinuously. This is a challenge for neural networks to learn, since they can typically only model functions without discontinuous jumps. As we increase the number of vertices in the polygon (number of set elements), the network must learn an increasing frequency of situations where all the outputs must discontinuously change at once, which becomes very difficult to model. Our experiment in subsection 6.1 confirms this. This example highlights a more general issue: whenever there are at least two set elements that can be smoothly interchanged, these discontinuities arise. We show this more formally in Appendix A. For example, the set of bounding boxes in object detection can be interchanged in much the same way as the points of our square here. An MLP or RNN that tries to generate these (like in ;) must handle which of its outputs is responsible for what element in a discontinuous way. Note that traditional object detectors like Faster R-CNN do not have this responsibility problem, because they do not treat object detection as a proper set prediction task with their anchor-based approach. The main idea behind our pooling method is simple: sorting each feature across the elements of the set and performing a weighted sum. The numerical sorting ensures the property of permutation-invariance. The difficulty lies in how to determine the weights for the weighted sum in a way that works for variable-sized sets. A key insight for auto-encoding is that we can store the permutation that the sorting applies in the encoder and apply the inverse of that permutation in the decoder. This allows the model to restore the arbitrary order of the set elements so that it no longer needs an assignment-based loss for training. This avoids the problem in Figure 1, because rotating the square by 90 degrees also permutes the outputs of the network accordingly. Thus, there is no longer a discontinuity in the outputs during this rotation. In other words, we make the auto-encoder permutation-equivariant: permuting the input set also permutes the neural network's output in the same way. We describe the model for the simplest case of encoding fixed-size sets in subsection 4.1, extend it to variable-sized sets in subsection 4.2, then discuss how to use this in an auto-encoder in subsection 4.3.
Figure 2: Overview of our FSPOOL model for variable-sized sets. In this example, the weights define piecewise linear functions with two pieces. The four dots on each line correspond to the positions where f is evaluated for a set of size four.
We are given a set of n feature vectors where each x (i) is a column vector of dimension d placed in some arbitrary order in the columns of X ∈ R d×n. From this, the goal is to produce a single feature vector in a way that is invariant to permutation of the columns in the matrix.
We first sort each of the d features across the elements of the set by numerically sorting within the rows of X to obtain the matrix of sorted features X: where X i,: is the ith row of X and SORT(·) sorts a vector in descending order. While this may appear strange since the columns of X no longer correspond to individual elements of the set, there are good reasons for this. A transformation (such as with an MLP) prior to the pooling can ensure that the features being sorted are mostly independent so that little information is lost by treating the features independently. Also, if we were to sort whole elements by one feature, there would be discontinuities whenever two elements swap order. This problem is avoided by our featurewise sorting. Efficient parallel implementations of SORT are available in Deep Learning frameworks such as PyTorch, which uses a bitonic sort (O(log 2 n) parallel time, O(n log 2 n) comparisons). While the permutation that the sorting applies is not differentiable, gradients can still be propagated pathwise according to this permutation in a similar way as for max pooling. Then, we apply a learnable weight matrix W ∈ R d×n to X by elementwise multiplying and summing over the columns (row-wise dot products). y ∈ R d is the final pooled representation of X. The weight vector allows different weightings of different ranks and is similar in spirit to the parametric version of the gather step in Gather-Excite . This is a generalisation of both max and sum pooling, since max pooling can be obtained with the weight vector [1, 0, . . ., 0] and sum pooling can be obtained with the 1 vector. Thus, it is also a maximally powerful pooling method for multi-sets while being potentially more flexible in what it can represent. When the size n of sets can vary, our previous weight matrix can no longer have a fixed number of columns. To deal with this, we define a continuous version of the weight vector in each row: we use a fixed number of weights to parametrise a piecewise linear function f: → R, also known as calibrator function . For a set of size three, this function would be evaluated at 0, 0.5, and 1 to determine the three weights for the weighted sum. For a set of size four, it would be evaluated at 0, 1/3, 2/3, and 1. This decouples the number of columns in the weight matrix from the set size that it processes, which allows it to be used for variable-sized sets. To parametrise a piecewise linear function f, we have a weight vectorw ∈ R k where k − 1 is the number of pieces defined by the k points. With the ratio r ∈, The max(·) term selects the two nearest points to r and linearly interpolates them. For example, if k = 3, choosing r ∈ [0, 0.5] interpolates between the first two points in the weight vector with (1 − 2r)w 1 + 2rw 2. We have a differentw for each of the d features and place them in the rows of a weight matrix W ∈ R d×k, which no longer depends on n. Using these rows with f to determine the weights: y is now the pooled representation with a potentially varying set size n as input. When n = k, this reduces back to Equation 4. For most experiments, we simply set k = 20 without tuning it. To create an auto-encoder, we need a decoder that turns the latent space back into a set. Analogously to image auto-encoders, we want this decoder to roughly perform the operations of the encoder in reverse. The FSPool in the encoder has two parts: sorting the features, and pooling the features. Thus, the FSUnpool version should "unpool" the features, and "unsort" the features. 
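A minimal PyTorch sketch of the pooling just described (Equations 4-6): sort each feature across the set, evaluate the piecewise linear calibrator at n evenly spaced points in [0, 1], and take the weighted sum. The weight initialization is arbitrary here (this section does not specify it), so the official implementation referenced by the authors should be preferred over this illustration.

```python
import torch
import torch.nn as nn

class FSPool(nn.Module):
    """Featurewise sort pooling for a batch of sets X of shape (batch, d, n)."""
    def __init__(self, d, k=20):
        super().__init__()
        # One piecewise linear calibrator (k points, k-1 pieces) per feature.
        self.weight = nn.Parameter(torch.randn(d, k) / k)  # init is an assumption

    def forward(self, x):
        batch, d, n = x.shape
        # Sort each feature across the n set elements (descending).
        x_sorted, perm = x.sort(dim=2, descending=True)
        # Evaluate the calibrator f(r) at r = 0, 1/(n-1), ..., 1.
        k = self.weight.size(1)
        pos = torch.linspace(0, 1, n, device=x.device) * (k - 1)
        left = pos.floor().long().clamp(max=k - 1)
        right = (left + 1).clamp(max=k - 1)
        frac = pos - left.float()
        w = (1 - frac) * self.weight[:, left] + frac * self.weight[:, right]
        # Weighted sum over the sorted elements; keep perm for the decoder.
        return (x_sorted * w).sum(dim=2), perm
```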
For the former, we define an unpooling version of Equation 6 that distributes information from one feature vector to a variable-size list of feature vectors. For the latter, the idea is to store the permutation of the sorting from the encoder and use the inverse of it in the decoder to unsort it. This allows the auto-encoder to restore the original ordering of set elements, which makes it permutation-equivariant. With y ∈ R d as the vector to be unpooled, we define the unpooling similarly to Equation 6 as In the non-autoencoder setting, the lack of differentiability of the permutation is not a problem due to the pathwise differentiability. However, in the auto-encoder setting we make use of the permutation in the decoder. While gradients can still be propagated through it, it introduces discontinuities whenever the sorting order in the encoder for a set changes, which we empirically observed to be a problem. To avoid this issue, we need the permutation that the sort produces to be differentiable. To achieve this, we use the recently proposed sorting networks , which is a continuous relaxation of numerical sorting. This gives us a differentiable approximation of a permutation matrix.., d} for each of the d features, which we can use in the decoder while still keeping the model fully differentiable. It comes with the trade-off of increased computation costs with O(n 2) time and space complexity, so we only use the relaxed sorting in the auto-encoder setting. It is possible to decay the temperature of the relaxed sort throughout training to 0, which allows the more efficient traditional sorting algorithm to be used at inference time. Lastly, we can use the inverse of the permutation from the encoder to restore the original order. where P T i permutes the elements of the ith row in X. Because the permutation is stored and used in the decoder, this makes our auto-encoder similar to a U-net architecture since it is possible for the network to skip the small latent space. Typically we find that this only starts to become a problem when d is too big, in which case it is possible to only use a subset of the P i in the decoder to counteract this. We are proposing a differentiable function that maps a set of feature vectors to a single feature vector. This has been studied in many works such as Deep Sets and PointNet , with universal approximation theorems being proven. In our notation, the Deep Sets model is g(j h(X :,j)) where h: Since this is O(n) in the set size n, it is clear that while it may be able to approximate any set function, problems that depend on higher-order interactions between different elements of the set will be difficult to model aside from pure memorisation. This explains the success of relation networks (RN), which simply perform this sum over all pairs of elements, and has been extended to higher orders by. Our work proposes an alternative operator to the sum that is intended to allow some relations between elements to be modeled through the sorting, while not incurring as large of a computational cost as the O(n 2) complexity of RNs. Sorting-based set functions The use of sorting has often been considered in the set learning literature due to its natural way of ensuring permutation-invariance. The typical approach is to sort elements of the set as units rather than our approach of sorting each feature individually. For example, the similarly-named SortPooling sorts the elements based on one feature of each element. 
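A matching sketch of the unpooling and unsorting step (Equations 7-8), under the simplifying assumption of a hard permutation: the pooled vector is spread back over n elements with the same calibrator weights and then scattered back to the original element order using the indices saved by the FSPool encoder. The paper's auto-encoder instead uses a relaxed, differentiable sort for this step, which this sketch does not implement.

```python
import torch

def fs_unpool(y, weight, perm):
    # y: (batch, d) pooled features; weight: (d, k) calibrator points;
    # perm: (batch, d, n) sort indices stored by the FSPool encoder.
    batch, d = y.shape
    n = perm.size(2)
    k = weight.size(1)
    pos = torch.linspace(0, 1, n, device=y.device) * (k - 1)
    left = pos.floor().long().clamp(max=k - 1)
    right = (left + 1).clamp(max=k - 1)
    frac = pos - left.float()
    w = (1 - frac) * weight[:, left] + frac * weight[:, right]   # (d, n)
    x_sorted = y.unsqueeze(2) * w                                # (batch, d, n)
    # Undo the encoder's sort: place each value back at its original position.
    out = torch.zeros_like(x_sorted)
    out.scatter_(2, perm, x_sorted)
    return out
```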
However, this introduces discontinuities into the optimisation whenever two elements swap positions after the sort. For variable-sized sets, they simply truncate (which again adds discontinuities) or pad the sorted list to a fixed length and process this with a CNN, treating the sorted vectors as a sequence. and truncate to a fixed-size set by computing a score for each element and keeping elements with the top-k scores. In contrast, our pooling handles variable set sizes without discontinuities through the featurewise sort and continuous weight space. propose a graph auto-encoder where the decoder use the "inverse" of what the top-k operator does in the encoder, similar to our approach. Instead of numerically sorting, and Zhang et al. (2019b) learn an ordering of set elements instead. Outside of the set learning literature, rank-based pooling in a convolutional neural network has been used in , where the rank is turned into a weight. Sorting within a single feature vector has been used for modeling more powerful functions under a Lipschitz constraint for Wasserstein GANs and improved robustness to adversarial examples . Set prediction Assignment-based losses combined with an MLP or similar are a popular choice for various auto-encoding and generative tasks on point clouds (; ;). An interesting alternative approach is to perform the set generation sequentially (; ; ; . The difficulty lies in how to turn the set into one or multiple sequences, which these papers try to solve in different ways. Since the initial release of this paper, Zhang et al. (2019a) developed a set prediction method which uses FSPool as a core component and motivate their work by our observations about the responsibility problem. Interestingly, their model uses the gradient of the set encoder, which involves computing the gradient of FSPool; this is closely related to the FSUnpool we proposed. We start with two auto-encoder experiments, then move to tasks where we replace the pooling in an established model with FSPool. Full can be found in the appendices, experimental details can be found in Appendix H, and we provide our code for reproducibility at [redacted]. We start with our simple dataset of auto-encoding regular polygons (section 3), with each point in a set corresponding to the x-y coordinate of a vertex in that polygon. This dataset is designed to explicitly test whether the responsibility problem occurs in practice. We keep the set size the same within a training run and only vary the rotation. We try this with set sizes of increasing powers of 2. Model The encoder contains a 2-layer MLP applied to each set element, FSPool, and a 2-layer MLP to produce the latent space. The decoder contains a 2-layer MLP, FSUnpool, and a 2-layer MLP applied on each set element. We train this model to minimise the mean squared error. As baseline, we use a model where the decoder has been replaced with an MLP and train it with either the linear assignment or Chamfer loss (equivalent to AE-EMD and AE-CD models in). Results First, we verified that if the latent space is always zeroed out, the model with FSPool is unable to train, suggesting that the latent space is being used and is necessary. For our training runs with set sizes up to 128, our auto-encoder is able to reconstruct the point set close to perfectly (see Appendix B). 
Meanwhile, the baseline converges significantly slower with high reconstruction error when the number of points is 8 or fewer and outputs the same set irrespective of input above that, regardless of loss function. Even when significantly increasing the latent size, dimensionality of layers, tweaking the learning rate, and replacing FSPool in the encoder with sum, mean, or max, the baseline trained with the linear assignment or Chamfer loss fails completely at 16 points. We verified that for 4 points, the baseline shows the discontinuous jump behaviour in the outputs as we predict in Figure 1. This experiment highlights the difficulty of learning this simple dataset with traditional approaches due to the responsibility problem, while our model is able to fit this dataset with ease. Next, we turn to the harder task of auto-encoding MNIST images -turned into sets of points -using a denoising auto-encoder. Each pixel that is above the mean pixel level is considered to be part of the set with its x-y coordinates as feature, scaled to be within the range of. The set size varies between examples and is 133 on average. We add Gaussian noise to the points in the set and use the set without noise as training target for the denoising auto-encoder. Model We use exactly the same architecture as on the polygon dataset. As baseline models, we combine sum/mean/max pooling encoders with MLP/LSTM decoders and train with the Chamfer loss. This closely corresponds to the AE-CD approach with the MLP decoder and the model by with the LSTM decoder. We tried the approach by Zhang et al. (2019a), but it performs much worse than the other baselines, likely because it requires a bigger encoder (our encoder has ∼3000 parameters, their encoder has ∼85000 parameters). Results We show example outputs in Figure 3 and the full in Appendix C. We focus on comparing our FSPool-FSUnpool model against the best baseline, which uses the sum pooling encoder and MLP decoder. In general, our model can reconstruct the digits much better than the baseline, which tends to predict too few points even though it always has 342 (the maximum set size) times 2 outputs available. Occasionally, the baseline also makes big errors such as turning 5s into 8s (first σ = 0.01 example), which we have not observed with our model. Instead of auto-encoding MNIST sets, we can also classify them. We use the same dataset and replace the set decoder in our model and the baseline with a 2-layer MLP classifier. We consider three variants: using the trained auto-encoder weights for the encoder and freezing them, not freezing them (finetuning), and training all weights from random initialisation. This tests how informative the learned representations of the pre-trained auto-encoder and the encoder are. Results We show our for σ = 0.05 in Table 1. Results for σ = 0.00 and 100 epochs are shown in Appendix D. Even though our model can store information in the permutation that skips the latent space, our latent space contains more information to correctly classify a set, even when the weights are fixed. Our model with fixed encoder weights already performs better after 1 epoch of training than the baseline models with unfrozen weights after 10 epochs of training. This shows the benefit of the FSPool-FSUnpool auto-encoder to the representation. When allowing the encoder weights to change (Unfrozen and Random init), our again improve significantly over the baselines. 
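For concreteness, the conversion of an MNIST image into a noisy point set described above might look roughly as follows. The scaling of coordinates to [0, 1] and the way the Gaussian noise is applied are assumptions on our part; the 0.1307 threshold is the mean pixel level quoted in Appendix H.

import torch

def image_to_set(img, threshold=0.1307, noise_std=0.05):
    # img: (28, 28) tensor with pixel values in [0, 1]. Pixels above the mean
    # pixel level form the set; each element's features are its x-y coordinates.
    ys, xs = torch.nonzero(img > threshold, as_tuple=True)
    points = torch.stack([xs, ys], dim=0).float() / (img.shape[-1] - 1)  # (2, n)
    noisy = points + noise_std * torch.randn_like(points)
    return noisy, points   # denoising auto-encoder input and clean target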
Interestingly, switching the relaxed sort to the unrelaxed sort in our model when using the fixed auto-encoder weights does not hurt accuracy. Training the FSPool model takes 45 seconds per epoch on a GTX 1080 GPU, only slightly more than the baselines with 37 seconds per epoch. CLEVR is a visual question answering dataset where the task is to classify an answer to a question about an image. The images show scenes of 3D objects with different attributes, and the task is to answer reasoning questions such as "what size is the sphere that is left of the green thing". Since we are interested in sets, we use this dataset with the ground-truth state description -the set of objects (maximum size 10) and their attributes -as input instead of an image of the rendered scene. Model For this dataset, we compare against relation networks (RN) -explicitly modeling all pairwise relations -Janossy pooling , and regular pooling functions. While the original RN paper reports a of 96.4% for this dataset, we use a tuned implementation by with 2.6% better accuracy. For our model, we modify this to not operate on pairwise relations and replace the existing sum pooling with FSPool. We use the same hyperparameters for our model as the strong RN baseline without further tuning them. Results Over 10 runs, Table 2 shows that our FSPool model reaches the best accuracy and also reaches the listed accuracy milestones in fewer epochs than all baselines. The difference in accuracy is statistically significant (two-tailed t-tests against sum, mean, RN, all with p ≈ 0.01). Also, FSPool reaches 99% accuracy in 5.3 h, while the fastest baseline, mean pooling, reaches the same accuracy in 6.2 h. Surprisingly, RNs do not provide any benefit here, despite the hyperparameters being explicitly tuned for the RN model. We show some of the functions f (·,W) that FSPool has learned in Appendix E. These confirm that FSPool uses more complex functions than just sums or maximums, which allow it to capture more information about the set than other pooling functions. We perform a large number of experiments on various graph classification datasets from the TU repository: 4 graph datasets from bioinformatics (for example with the graph encoding the structure of a molecule) and 5 datasets from social networks (for example with the graph encoding connectivity between people who worked with each other). The task is to classify the whole graph into one of multiple classes such as positive or negative drug response. Model We use the state-of-the-art graph neural network GIN as baseline. This involves a series of graph convolutions (which includes aggregation of features from each node's set of neighbours into the node), a readout (which aggregates the set of all nodes into one feature vector), and a classification with an MLP. We replace the usual sum or mean pooling readout with FSPool k = 5 for our model. We repeat 10-fold cross-validation on each dataset 10 times and use the same hyperparameter ranges as for our model and the GIN baseline. Results We show the in Appendix F. On 6 out of 9 datasets, FSPool achieves better test accuracy. On a different 6 datasets, it converges to the best validation accuracy faster. A Wilcoxon signed-rank test shows that the difference in accuracy to the standard GIN has p ≈ 0.07 (W = 7) and the difference in convergence speed has p ≈ 0.11 (W = 9). Keep in mind that just because the have p > 0.05, it does not mean that the are invalid. Zhang et al. 
(2019a) build on the ideas in this paper to develop a model that can predict sets from an image. Their model requires from a set encoder that the more similar two set inputs are, the more similar their representations should be. This is harder than classification, because different inputs (of the same class) should no longer map to the same representation. In this experiment, we quantify the benefit of the RN + FSPool set encoder they used. We use their experimental set-up and replace FSPool with sum (this gives the normal RN model) or max pooling. We train this on CLEVR to predict the set of bounding boxes or the state description (this was the input in subsection 6.3). Results Appendix G shows that for both bounding box and state prediction models, the RN encoder using FSPool is much better than sum or max. This shows that it is possible to improve on standard Relation Networks simply by replacing the sum with FSPool when the task is challenging enough. In this paper, we identified the responsibility problem with existing approaches for predicting sets and introduced FSPool, which provides a way around this issue in auto-encoders. In experiments on two datasets of point clouds, we showed that this in much better reconstructions. We believe that this is an important step towards set prediction tasks with more complex set elements. However, because our decoder uses information from the encoder, it is not easily possible to turn it into a generative set model, which is the main limitation of our approach. Still, we find that using the auto-encoder to obtain better representations and pre-trained weights can be beneficial by itself. Our insights about the responsibility problem have already been successfully used to create a model without the limitations of our auto-encoder (a). In classification experiments, we also showed that simply replacing the pooling function in an existing model with FSPool can give us better and faster convergence. We showed that FSPool consistently learns better set representations at a relatively small computational cost, leading to improved in the downstream task. Our model thus has immediate applications in various types of set models that have traditionally used sum or max pooling. It would be useful to theoretically characterise what types of relations are more easily expressed by FSPool through an analysis like in. This may in further insights into how to learn better set representations efficiently. The following theorem is a more formal treatment of the responsibility problem ing in discontinuities. Theorem 1. For any set function f: n is the set of all sets of size n with elements in R d ) from a set of points S = {x 1, x 2, . . ., x n} to a list representation of that set L = [x σ, x σ,..., x σ(n) ] with some fixed permutation σ ∈ Π, there will be a discontinuity in f: there exists an ε > 0 such that for all δ > 0, there exist two sets S 1 and S 2 where: d s is a measure of the distance between two sets (e.g. Chamfer loss) and d l is the sum of Euclidean θ Figure 4: Example of the set with two points. Proof. We prove the theorem by considering mappings from a set of two points in two dimensions. For larger sets or sets with more dimensions, we can isolate two points and two dimensions and ignore the remaining points and dimensions. Let us consider the set of two points S(θ) = while for θ > θ *, the list representation will Let ε = 3.9 and δ be given. 
We can find a sufficiently small α > 0 so that The reason why this does not apply to our method is that rather than choosing a fixed σ for the list representation, the permutation-equivariance (instead of the invariance of set functions) allows our model to have L(π) = L. Results In Table 3, Table 4, and Table 5, we show the of various model and training loss combinations. We include a random baseline that outputs a polygon with the correct size and centre, but random rotation. These show that FSPool with the direct MSE training loss is clearly better than the baseline with either linear assignment or Chamfer loss on all the evaluation metrics. When the set size is 16 or greater, the other combinations only perform as well as the random baseline because they output the same constant set regardless of input. Results We show the for the default MNIST setting in Table 6. Interestingly, the sum pooling baseline has a lower Chamfer reconstruction error than our model, despite the example outputs in Figure 3 looking clearly worse. This demonstrates a weakness of the Chamfer loss. Our model avoids this weakness by being trained with a normal MSE loss (with the cost of a potentially higher Chamfer loss), which is not possible with the baselines. The sum pooling baseline has a better test Chamfer loss because it is trained to minimise it, but it is also solving an easier task, since it does not need to distinguish padding from non-padding elements. The main reason for this difference comes from the shortcoming of the Chamfer loss in distinguishing sets with duplicates or near-duplicates. For example, the Chamfer loss between [1, 1.001, 9] and [1, 9, 9.001] is close to 0. Most points in an MNIST set are quite close to many other points and there are many duplicate padding elements, so this problem with the Chamfer loss is certainly present on MNIST. That is why minimising MSE can lead to different with higher Chamfer loss than minimising Chamfer loss directly, even though the qualitative seem worse for the latter. We can make the comparison between our model and the baselines more similar by forcing the models to predict an additional "mask feature" for each set element. This takes the value 1 when the point is present (non-padding element) and 0 (padding element) when not. This setting is useful for tasks where the predicted set size matters, as it allows points at the coordinates to be distinguished from padding elements. These padding elements are necessary for efficient minibatch-wise training. The of this variant are shown in Table 7. Now, our model is clearly better: even though our auto-encoder minimises an MSE loss, the test Chamfer loss is also much better than all the baselines. Having to predict this additional mask feature does not affect our model predictions much because our model structure lets our model "know" which elements are padding elements, while this is much more challenging for the baselines. Table 9: Classification accuracy (mean ± stdev) on MNIST for 100 epochs over 6 runs. Results Table 8 and Table 9 show the for σ = 0.00 and for 100 epochs for both σ = 0.05 and σ = 0.00 respectively. Note that these are based on pre-trained models from the default MNIST setting without mask feature. Like before, the FSPool-based models are consistently superior to all the baselines. 
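The near-duplicate failure mode of the Chamfer loss discussed above is easy to check numerically. A small sketch, assuming a squared-distance Chamfer formulation (the exact variant used in the experiments is not stated here):

import torch

def chamfer(a, b):
    # symmetric Chamfer distance between two 1-d point multisets
    d = (a.unsqueeze(1) - b.unsqueeze(0)) ** 2       # pairwise squared distances
    return d.min(dim=1).values.sum() + d.min(dim=0).values.sum()

a = torch.tensor([1.0, 1.001, 9.0])
b = torch.tensor([1.0, 9.0, 9.001])
print(chamfer(a, b).item())   # ~2e-6, although the two multisets clearly differ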
Note that while a similar set version of MNIST has been reported to reach ∼99% accuracy, our model uses noisy sets as input and is much smaller and simpler: we have 3820 parameters, while that model has 1.6 million parameters. Our model also does not use dropout, batch norm, a branching network architecture, or a stepped learning rate schedule. When we try to match their model size, our accuracies for σ = 0.00 increase to ∼99% as well.

E CLEVR

Figure 5: Shapes of piecewise linear functions learned by the FSPool model on CLEVR. These show r ∈ [0, 1] on the x-axis and f(r, w̄) on the y-axis for a particular w̄ of a fully-trained model. A common shape among these functions is a variant of max pooling: close to 0 weight for most ranks and a large non-zero weight on either the maximum or the minimum value, for example in row 2 column 2. There are many functions that simple maximums or sums can not easily represent, such as a variant of max pooling with the values slightly below the max receiving a weight of the opposite sign (see row 1 column 1) or the shape in the penultimate row, column 5. The functions shown here may have a stronger tendency towards 0 values than normal due to the use of weight decay on CLEVR.

Experimental setup The datasets and node features used are the same as in GIN; we did not cherry-pick them. Because the social network datasets are purely structural without node features, a constant 1 feature is used on the RDT datasets and the one-hot-encoded node degree is used on the other social network datasets. The hyperparameter sweep is done based on best validation accuracy for each fold in the cross-validation individually and over the same combinations as specified in GIN. Note that in GIN, hyperparameters are selected based on best test accuracy. This is a problem, because they consider the number of epochs a hyperparameter when accuracies tend to significantly vary between individual epochs. For example, our average result on the PROTEINS dataset would change from 73.8% to 77.1% if we were to select based on best test accuracy, which would be better than their 76.2%. While we initially also used k = 20 in FSPool for this experiment, we found that k = 5 was consistently an improvement. The k = 20 model was still better than the baseline on average, by a smaller margin.

Results We show our results for GIN-FSPool and the GIN baseline averaged over 10 repeats in Table 10. On the majority of datasets, FSPool has slightly better accuracies than the strong baseline and consistently takes fewer epochs to reach its highest validation accuracy. On the two RDT datasets, this improvement is large. Interestingly, these are the two datasets where the number of nodes to be pooled is by far the largest, with an average of 400+ nodes per graph, compared to the next largest, COLLAB, with an average of 75 nodes. This is perhaps evidence that FSPool is helping to avoid the bottleneck problem of pooling a large set of feature vectors to a single feature vector. We emphasise that the main comparison to be made is between the GIN-Base and the GIN-FSPool model, since that is the only comparison where the only factor of difference is the pooling method.
When comparing against other models, the network architecture, training hyperparameters, and evaluation methodology can differ significantly. Keep in mind that while GIN-Base looks much worse than the original GIN-Base*, the difference is that our implementation has hyperparameters properly selected by validation accuracy, while GINBase* selected them by test accuracy. If we were to select based on test accuracy, our implementation frequently outperforms their . Also, they only performed a single run of 10-fold crossvalidation. G DEEP SET PREDICTION NETWORKS Results Table 11 and Table 12 show that the FSPool-based RN encoder is much better than any of the baselines. The representation of DSPN-RN-FSPool is good enough that iterating the DSPN algorithm for more steps than the model was trained with can benefit the prediction, while for the baselines it generally just worsens. This is especially apparent for the harder dataset of state prediction, where more information has to be compressed into the latent space. We provide the code to reproduce all experiments at [redacted]. For almost all experiments, we used FSPool and the unpooling version of it with k = 20. We guessed this value without tuning, and we did not observe any major differences when we tried to change this on CLEVR to k = 5 and k = 40.W can be initialised in different ways, such as by sampling from a standard Gaussian. However, for the purposes of starting the model as similarly as possible to the sum pooling baseline on CLEVR and on the graph classification datasets, we initialiseW to a matrix of all 1s on them. The polygons are centred on 0 with a radius of 1. The points in the set are randomly permuted to remove any ordering in the set from the generation process that a model that is not permutationinvariant or permutation-equivariant could exploit. We use a batch size of 16 for all three models and train it for 10240 steps. We use the Adam optimiser with 0.001 learning rate and their suggested values for the other optimiser parameters (PyTorch defaults). Weights of linear and convolutional layers are initialised as suggested in. The size of every hidden layer is set to 16 and the latent space is set to 1 (it should only need to store the rotation as latent variable). We have also tried much hidden and latent space sizes of 128 when we tried to get better for the baselines. We train on the training set of MNIST for 10 epochs and the shown come from the test set of MNIST. For an image, the coordinate of a pixel is included if the pixel is above the mean pixel level of 0.1307 (with pixel levels ranging 0-1). Again, the order of the points are randomised. We did not include of the linear assignment loss because we did not get the model to converge to of similar quality to the direct MSE loss or Chamfer loss, and training time took too long (> 1 day) in order to find better parameters. The latent space is increased from 1 to 16 and the size of the hidden layers is increased from 16 to 32. All other hyperparameters are the the same as for the Polygons dataset. The architecture and hyperparameters come from the third-party open-source implementation available at https://github.com/mesnico/RelationNetworks-CLEVR. For the RN baseline, the set is first expanded into the set of all pairs by concatenating the 2 feature vectors of the pair for all pairs of elements in the set. 
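A sketch of this pair expansion (function and variable names are ours, not the referenced implementation's):

import torch

def all_pairs(X):
    # X: (n, d) set of n elements -> (n*n, 2d) set of concatenated ordered pairs,
    # as used by the RN baseline before its per-pair MLP.
    n, d = X.shape
    left = X.unsqueeze(1).expand(n, n, d)
    right = X.unsqueeze(0).expand(n, n, d)
    return torch.cat([left, right], dim=2).reshape(n * n, 2 * d)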
For the Janossy Pooling baseline, we use the model configuration from that appeared best in their experiments, which uses π-SGD with an LSTM that has |h| as neighbourhood size. The question representation coming from the 256-unit LSTM, processing the question tokens in reverse with each token embedded into 32 dimensions, is concatenated to all elements in the set. Each element of this new set is first processed by a 4-layer MLP with 512 neurons in each layer and ReLU activations. The set of feature vectors is pooled with a pooling method like sum and the output of this is processed with a 3-layer MLP (hidden sizes 512, 1024, and number of answer classes) with ReLU activations. A dropout rate of 0.05 is applied before the last layer of this MLP. Adam is used with a starting learning rate of 0.000005, which doubles every 20 epochs until the maximum learning rate of 0.0005 is reached. Weight decay of 0.0001 is applied. The model is trained for 350 epochs. The GIN architecture starts with 5 sequential blocks of graph convolutions. Each block starts with summing the feature vector of each node's neighbours into the node's own feature vector. Then, an MLP is applied to the feature vectors of all the nodes individually. The details of this MLP were somewhat unclear in and we chose Linear-ReLU-BN-Linear-ReLU-BN in the end. We tried Linear-BN-ReLU-Linear-BN-ReLU as well, which gave us slightly worse validation for both the baseline and the FSPool version. The outputs of each of the 5 blocks are concatenated and pooled, either with a sum for the social network datasets, mean for the social network datasets (this is as specified in GIN), or with FSPool for both types of datasets. This is followed by BNLinear-ReLU-Dropout-Linear as classifier with a softmax output and cross-entropy loss. We used the torch-geometric library to implement this model. The starting learning rate for Adam is 0.01 and is reduced every 50 epochs. Weights are initialised as suggested in. The hyperparameters to choose from are: dropout ratio ∈ {0, 0.5}, batch size ∈ {32, 128}, if bioinformatics dataset hidden sizes of all layers ∈ {16, 32} and 500 epochs, if social network dataset the hidden size is 64 and 250 epochs. Due to GPU memory limitations we used a batch size of 100 instead of 128 for social network datasets. The best hyperparameters are selected based on best average validation accuracy across the 10-fold cross-validation, where one of the 9 training folds is used as validation set each time. In other words, within one 10-fold cross-validation run the hyperparameters used for the test set are the same, while across the 10 repeats of this with different seeds the best hyperparameters may differ. The architecture and hyperparameters come from the third-party open-source implementation available at https://github.com/Cyanogenoid/dspn. The only thing we change from this is replacing the pooling in the RN. All other hyperparameters are kept the same. The input image is encoded with a ResNet-34 with two additional convolutional layers with 512 filters and stride two to obtain a feature vector for the image. This feature vector is decoded into a set using the DSPN algorithm, which requires encoding an intermediate set with the set encoder and performing gradient descent on it. This set encoder creates all pairs of sets like in normal RNs, processes each pair with a 2-layer MLP with 512 neurons with one ReLU activation in the middle, then pools this into a feature vector. 
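Returning to the graph classification model described above, one GIN block (neighbour sum followed by the Linear-ReLU-BN-Linear-ReLU-BN MLP) can be sketched as below. The dense adjacency matrix and the uniform hidden size are simplifications on our part; the actual experiments use the torch-geometric implementation.

import torch
from torch import nn

class GINBlock(nn.Module):
    # One block as described: sum each node's neighbour features into the node,
    # then apply a Linear-ReLU-BN-Linear-ReLU-BN MLP to every node.
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.BatchNorm1d(dim),
            nn.Linear(dim, dim), nn.ReLU(), nn.BatchNorm1d(dim))

    def forward(self, H, A):
        # H: (num_nodes, dim) node features, A: (num_nodes, num_nodes) adjacency
        return self.mlp(H + A @ H)   # add summed neighbour features, then MLP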
The intermediate set is updated with the gradient 10 times in training, but can be iterated a different amount in evaluation. The model is trained to minimise the linear assignment loss with the Adam optimiser for 100 epochs using a learning rate of 0.0003.
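The linear assignment loss mentioned here is commonly implemented with an off-the-shelf Hungarian solver; a minimal sketch, assuming squared-error matching costs (the exact cost used by DSPN is not specified in this excerpt):

import torch
from scipy.optimize import linear_sum_assignment

def linear_assignment_loss(pred, target):
    # pred, target: (n, d) sets; match elements one-to-one with the Hungarian
    # algorithm on pairwise costs, then average the matched squared errors.
    cost = torch.cdist(pred, target) ** 2
    rows, cols = linear_sum_assignment(cost.detach().cpu().numpy())
    return ((pred[rows] - target[cols]) ** 2).mean()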
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HJgBA2VYwH
Sort in encoder and undo sorting in decoder to avoid responsibility problem in set auto-encoders
We present a method for policy learning to navigate indoor environments. We adopt a hierarchical policy approach, where two agents are trained to work in cohesion with one another to perform a complex navigation task. A Planner agent operates at a higher level and proposes sub-goals for an Executor agent. The Executor reports an embedding summary back to the Planner as additional side information at the end of its series of operations for the Planner's next sub-goal proposal. The end goal is generated by the environment and exposed to the Planner which then decides which set of sub-goals to propose to the Executor. We show that this Planner-Executor setup drastically increases the sample efficiency of our method over traditional single agent approaches, effectively mitigating the difficulty accompanying long series of actions with a sparse reward signal. On the challenging Habitat environment which requires navigating various realistic indoor environments, we demonstrate that our approach offers a significant improvement over prior work for navigation. The ability to model and understand the world at a high-level is crucial for performing complex tasks in real world environments. Part of this high-level understanding involves the ability to divide and plan out tasks that are complicated and have long time horizons into more manageable subtasks. For example, when navigating to a new location, we typically break the task down into a set of manageable directions (i.e. drive along a certain road until a familiar landmark before taking a turn). Imbuing machines with this ability of creating abstractions for long and complex tasks is an active area of research known as hierarchical learning (; 1999). Research for navigation has recently seen a rejuvenation due to the advent of learning-based approaches; ). Embodied learning-based approaches have shown some appealing properties over classical approaches such as being able to operate in complex environments with limited sensor data . However, there is a need for the ability to plan across long time horizons with sparse reward signals. This in effect, causes limitations such as the inability to overcome small obstacles when navigating towards a given goal and the requirement of invoking the environment a large number of times for any meaningful learning to occur . Works which have combined hierarchical reinforcement learning with imitation learning have shown promising (b;), by leveraging expert trajectories with policy sketches , which are less expensive to obtain; however these sketches still require annotation of the environment. In this work, we study such hierarchical control for the task of indoor navigation, whereby an embodied agent is randomly spawned within a novel and complex environment and must learn to navigate this environment through interaction (a). We address this challenging learning problem through a hierarchical policy approach, where two agents are cooperatively trained together. Each agent performs a different role, where one agent acts as a Planner, learning how to propose good sub-goals to an Executor agent, which acts at the low level to achieve these sub-goals (Fig. 1). In contrast to existing hierarchical policy learning approaches, communication between our two agents is two-way, where the Executor provides the Planner with a summary of its series of actions and recent observations. 
This aids the Planner in deciding the next sub-goal with additional side Figure 1: Our PLEX framework adopts a hierarchical policy approach, where a Planner proposes sub-goals for an Executor to act upon within an environment. The Planner receives an egocentric, top-down view with the target location and an embedding summary provided by the Executor. The Executor receives visual sensory data (i.e. colour and depth) as its input and a sub-goal provided by the Planner. Our method reduces the need for long-term planning and addresses the known sample inefficiency problem accompanying memory models within deep reinforcement learning approaches. information provided by the Executor. To this end, we propose PLEX, a planning and executing learning framework which offers the following contributions: • A hierarchical reinforcement learning approach where two agents specialise on different tasks but are jointly trained by sharing information • We demonstrate both theoretically and empirically that our method benefits from significantly improved sample efficiency as the time horizon is distributed between the Planner and Executor • By extension, our approach mitigates problems prevalent in long-horizon planning, especially those adopting LSTM planning approaches Hierarchical Reinforcement Learning The application of hierarchical reinforcement learning (; 1999;) in real world settings has allowed deep reinforcement learning approaches to scale with the increasingly higher sample requirements that are required by complex real world environments (b; a). Early works considered an options-based approach, where there was an underlying assumption of the existence of some useful set of options which are fully defined beforehand; this allowed learning to occur at a higher level in terms of those set of options (; 1999). , a hierarchical approach was adopted without the use of imitation learning. However, the work was limited to exploring non-realistic environments and information flow was uni-directional from master to controller. proposed sub-goals which have semantic meaning; although this demanded a high-level of supervision where rich annotations on a given environment were required. The work of showed that using a hierarchical learning approach can lead to a reduction in the cost of exploration on problems with sparse rewards under the limited context of game environments. Das et al. (2018b) explored the embodied question answering task (a), leveraging heuristics embedded in the environment for providing expert trajectories used in the imitation learning initialisation stage. This method is heavily supervised and assumes that full semantic information of the environment is known. Additionally, the approach of Das et al. (2018b) is limited to a specific set of sub-goals, for example: Exit-room, Find-object, etc., imposing limitations on the expressiveness of the master policy. By contrast, our method allows the Planner to propose sub-goals which directly relate to the given environment (i.e. a continuous point-goal vector) and does not rely on external annotations for supervision. Embodied Agent Learning There has been a recent surge of interest towards embodied agent learning where an agent is personified and spawned within an environment and set to complete a certain task. Such tasks may include question answering (a), point goal navigation , roaming and scene exploration or coverage . 
This problem is purposely set up in a way which allows for relatively easy transfer from simulation environment to a physical robotic platform operating in a real world environment. Various simulation environments have been designed around this (; ; ; ;), which aim to provide a realistic indoor simulation for training embodied agents on the aforementioned tasks. These environments allow an agent to invoke the environment numerous times, and in effect, scale the sample hungry nature of the problem to an extent where agents perform at a reasonable level (; a). trained an agent using the Proximal Policy Optimisation algorithm , augmenting it with a memory unit . Although this provided a strong initial baseline, achieving this required invoking an environment for millions of steps. The recent work of attempted to address this by proposing a memory component which makes use of the transformer network . Although this approach mitigated a key weakness in LSTM memory units , it incurs with it a linear growth of memory usage which is addressed via memory compression and hence increases the computational complexity of the model. The aforementioned issues are not present in our framework since the Executor's rollout is strictly shorter as we will soon demonstrate. 1 2. The sum of these i.i.d variables is S N = N i=1 X i. As we increase the rollout length N, the agent's probability of exploring decreases exponentially and is given by the following bound: Intuitively and as shown theoretically, reducing the rollout length N has a positive impact on the required exploration (for a more general form of this proposition, we refer the interested reader to). One way to exploit this insight is through a hierarchical approach which allows setting sub-goals that are closer in state space to the agent's current state. Given this insight, we now provide details of our hierarchical approach which consists of two main components: a Planner and an Executor policy. Each of these components is treated as an independent agent with respective policies π θ P L and π θ EX. A high-level overview of our framework is shown in Fig. 2. The environment is denoted as P, from which states s are sampled. A reward function R provides the reward conditioned on a state s, an action a and a goal. In general, a goal denoted by g indicates the end goal for the current episode, with g P denoting the sub-goals given by the Planner. N EX and N P L denote the respective rollout lengths of the Executor and Planner. Executor Building upon previous RL frameworks, an Executor is optimised using standard policy learning techniques . A key difference between existing non-hierarchical Figure 2: A system overview of PLEX which consists of two main components: a Planner and Executor agent. The Executor's task is to perform a series of actions such that it will traverse the environment towards a target location given by a point-goal vector (a sub-goal that is provided by the Planner). The Planner's task is to generate sub-goals that help navigate the entire system to an end goal (which only the Planner can see). An environment's state is comprised of an RGB-D observation, an egocentric map and a point-goal vector measurement pointing towards a target location as defined by. The Executor is provided with the RGB-D observation, a sub-goal generated by the Planner and returns an Executor Latent Information (ELI) vector that summarises its rollout. The Planner sees an egocentric map with a roughly 4m view horizon and the ELI vector. 
The Planner and Executor agents are trained simultaneously and using the sub-goal by the Planner and ELI vector by the Executor, we create a two-way communication channel, enabling our framework to regulate the flow of information between our two agents. approaches and our approach is the Executor End State or sub-goal g P, which is not provided by the environment but rather by a Planner policy. Consequently, the rewards provided to the Executor are sampled using the sub-goal (i.e.: r i ∼ R(r i |s i−1, a i, g P)). In line with this, the Executor policy also provides an Executor Latent Information (ELI) vector which summarises its rollout experience back to the Planner policy to provide feedback for the Planner towards planning the next sub-goal. In effect, this creates a cooperative feedback loop between the Executor and Planner. A detailed outline of the Executor routine is provided by Algorithm 1. Planner The Planner does not invoke the environment directly, but it modifies the goal and rewards observed by the Executor when the Executor interacts with the environment. This modification is achieved through the generation of planned sub-goals given to the Executor by the Planner. As such, given a state s and an ELI vector provided by the Executor, the Planner produces a subgoal. Using this sub-goal, the Executor routine (Algorithm 1) is invoked and a reward is returned to the Planner which provides it with feedback on the generated sub-goal. Upon optimising the Planner, we note that the Executor should not be directly affected from the Planner optimisation process (i.e. we terminate backpropagated gradients from the Planner policy before they reach the Executor policy). In other words, the ELI vector provided by the Executor does not affect the Executor directly, but implicitly through improving the Planner policy's generated sub-goals since providing direct feedback to the Executor through the ELI vector will in a redundant Planner. Algorithm 2 fully outlines our approach and details the Planner routine. We focus on the indoor PointGoal navigation task as defined in. In this setup, a point-goal vector providing the distance and angle towards the target point in the environment is assumed. If the indoor environment would be an empty space, this task would be trivial. However, in a more realistic, real world setting, the agent is spawned in a room within the environment and the target may be in a different room. In such a case, the agent needs to effectively navigate around obstacles and plan its future actions in the presence of other rooms that may or may not lead to the target point. The environment provides RGB-D sensory information and inaccurate access to the location and orientation (as could be obtained using a fusion of GPS and IMU sensors ). From these sensors, the environment can provide a coarse egocentric top-down map; we construct our egocentric map by adapting a similar approach in 1 (Section A.1 details the method of for constructing the egocentric map). The top-down map has a resolution of 0.5m per pixel which is a reasonable assumption given the provided sensors. In our hierarchical problem formulation, the Planner is provided with the egocentric, top-down map of size 32 × 32 pixels, the point-goal vector (translated to ego-map point-goal) (distance, direction) and the Executor Latent Information (ELI) provided by the Executor (a vector of 128 values). 
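Putting the two routines together, the Planner-Executor interaction outlined by Algorithms 1 and 2 can be summarised roughly as follows. The env, planner and executor interfaces are stand-ins rather than the paper's API; the sketch is only meant to make the reward split (sub-goal reward for the Executor, end-goal reward for the Planner) and the stop-gradient on the ELI vector explicit.

def planner_step(env, planner, executor, state, eli, n_ex=10):
    # One Planner decision followed by an Executor rollout of at most n_ex steps.
    # The ELI summary is detached so Planner gradients never reach the Executor.
    sub_goal = planner.act(state.ego_map, state.point_goal, eli.detach())
    planner_reward = 0.0
    for _ in range(n_ex):
        action, eli = executor.act(state.rgbd, sub_goal)
        next_state, done = env.step(action)
        executor.store(env.reward(state, action, sub_goal))        # sub-goal reward
        planner_reward += env.reward(state, action, env.end_goal)  # end-goal reward
        state = next_state
        if done or action == "stop":   # Executor stop returns control to the Planner
            break
    return state, planner_reward, eli, done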
The Planner emits a binary category distribution for continue or stop operations and two continuous sub-goal distributions: the distance N (µ ρ, σ ρ) and direction N (µ θ, σ θ). The Executor is provided with RGB-D sensory information of 256 × 256 resolution along with the sub-goal computed by the Planner and emits a four category distribution with the actions described in Section 4.3, and emits the ELI vector which is returned to the Planner when the Executor stops or performs the maximum number of actions, N EX (note that for the Executor, the stop action does not stop the entire episode but simply returns control back to the Planner). Our Planner has a perception model for extracting embeddings from the egocentric map. This perception model is a CNN composed of 2 convolutional layers with filter size 4 × 4 and 3 × 3 with a stride of 2 and 1 respectively with ReLU activation. The output of these convolution layers is flattened and concatenated with the ELI vector and transferred to the last fullyconnected linear layer which outputs a 512 vector with ReLU activation. This output is concatenated with the 2-valued point-goal vector. The Executor's perception model consists of 3 convolution layers with filter sizes of 8 × 8, 4 × 4 and 3 × 3 with strides of 4, 2 and 1 respectively and ReLU activations. This is followed by a fullyconnected linear layer which outputs an embedding of 512 or 128, which depends on the model variant (LSTM or SMT respectively). We concatenate the 2-valued sub-goal vector to the output embedding. For the LSTM memory model, we employ the GRU formulation with hidden state of size 512. For the SMT memory model, we use an 8 multi-head attention mechanism (we do not employ the furthest point sampling technique used in as it did not affect the performance of our model). The output of either the GRU component or the SMT is the action embedding (a vector of 128 values corresponding to the output state vector of the LSTM) and also functions as the ELI vector. Both the Planner and the Executor policies are trained using the Proximal Policy Optimisation algorithm from , following the original settings found in the paper except for parameters specific to Algorithms 1 and 2: N EX = 10, γ EX = 0.9, N P L = 128, γ P L = 0.95. Note that the values of N EX and γ EX are chosen such that the Executor will focus on short-term rewards; this in turn will implicitly encourage the Planner to generate subsequent sub-goals which are nearby (further discussed in Section 4.4). Habitat is a modular environment API and can be used with realistic indoor environments coupled with tasks such as PointGoal navigation and Embodied Question Answering (a). For our experiments, we use the Gibson dataset and perform the PointGoal navigation task . We use the same train-test split as provided in which consists of 72 train scenes with 4.9M PointGoal navigation tasks and 16 unseen test scenes comprised of 1k PointGoal navigation tasks. The environment accepts 4 actions: [Forward, Turn left, Turn right, Stop], where invoking the stop action ends an episode with an episode running to a maximum of 500 steps otherwise. The reward structure is similar to with a success providing a reward of 10; otherwise, the agent's reward is the change in geodesic distance between the agent and the target with an additional slack of λ = −0.01. In addition, we penalise for obstacle collision with a cost c = −0.01. This additional collision penalty is a desirable property for embodied agents . 
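Read literally, this reward structure might be implemented as in the sketch below; the sign convention for the change in geodesic distance (positive when the agent moves closer) and applying the collision penalty only on non-successful steps are our assumptions.

def step_reward(prev_geo, geo, success, collided,
                slack=-0.01, collision_cost=-0.01, success_reward=10.0):
    # +10 on success; otherwise the reduction in geodesic distance to the target
    # plus the slack term, with an extra penalty whenever the agent collides.
    if success:
        return success_reward
    r = (prev_geo - geo) + slack
    if collided:
        r += collision_cost
    return r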
In Section 3, we discussed the negative impact a long rollout horizon has on the exploration efficiency. The context was in a setup which allowed two actions, and did not take into account the complexities of realistic environments (such as obstacles and more actions). Hence, to choose an appropriate rollout length for the Executor (given the chosen environment and sensory input), we perform a short experiment which analyses a random exploration policy within the proposed setup. We randomly choose 500 starting locations from the training scenes, with a different initial starting point for recurring scenes. The agent can take 3 actions: [Forward, Turn Left, Turn Right], where the "Stop" action is excluded so that the rollout length can be inspected without being shortened. As we increase the number of random actions the random policy takes (N =), we inspect two values: the geodesic distance between the starting position and the end position of the policy (Fig. 3a), and the geodesic distance normalised with the number of random actions taken by the policy (Fig. 3b). The geodesic distance normalised by policy actions indicates the average distance travelled per number of actions and is an indication of exploration efficiency. As such, for initial exploration, we would generally want this value to be high. Fig. 3b observes that as the rollout horizon of the random policy increases, the exploration efficiency begins to rapidly decline. Whilst this decline is not exponential as suggested by Proposition 1, it provides us with further motivation not to set N EX too high. Further, we need to consider a large enough N EX to enable the Executor explore its environment. Given Figs. 3a and 3b and given a egocentric map with a resolution of 0.5m per pixel, N EX = 10 appears to maintain a good balance between the two criteria we wish to impose on the Executor: a high probability of exploring more than a unit distance in the egocentric map, along with high exploration efficiency. Baselines We compare our approach to two baseline methods PPO LSTM and PPO SMT. The former refers to the method found in , and the latter is a recent approach found in. For comparison, we employ two variants of our PLEX framework: PLEX LSTM and PLEX SMT. This comparison allows a fair comparison regarding the benefits toward improved sample efficiency that our framework offers for both compared baselines. Evaluation Metrics For evaluating the performance on the PointGoal task, we examine the reward each method obtains and the success ratio of each method. A success for an episode is defined when the agent provides a stop action and the geodesic shortest path distance between the agent and its end goal is less than 0.2m. We do not use the Shortest Path Length (SPL) as a evaluation metric, The observed success rate of each model. In this case, a success is defined as the agent issuing a stop action when the geodesic shortest path distance between the agent and its goal is less than 0.2m. Note that across both reward and success rate curves, the baseline methods exhibit a slow down in learning at around 15M environment steps whilst our method remains relatively robust and is able to maintain a steady learning rate. as defined by since maximising on this metric as an objective can in agents which exhibit undesirable behaviours (such as approaching too closely and colliding with obstacles). 
Instead, we observe the reward which is a unitless measure and penalises undesirable behaviours such as obstacle collision and moving in the wrong direction. This metric, along with success rates, provides good indicators for assessing performance on this task . Quantitative Results In Figs. 4a and 4b, we show the respective reward and success ratios of each method. Similarly, in Table 1, we outline the mean and variance of the final rewards and success ratios for all the methods after 30M environment steps. Each experiment was repeated 10 times across all methods. Note that the LSTM variant of our method (PLEX LSTM) achieves a success rate of 0.81 which is only slightly higher than the PPO LSTM reported in. The key difference is the significant lower number of environment steps (approximately half) that our method required for achieving a similar . The key idea behind the ELI vector is to provide additional information to the Planner that may not be observed in the egocentric map. Hence, the ELI vector is a mechanism that can be used by the Executor to relay a compressed representation of its previous observations to the Planner; assisting it towards generating sub-goals. Since the ELI vector is an integration of previous observations outputted by the Executor's LSTM component, we refer to it as a summary of the Executor's rollout. We adapt a hierarchical framework by , where conveniently, we can modify the Q-learning approach to a policy-based approach. In this context, our Executor replaces the Performance Metrics PPO SMT PPO LSTM Table 1: Avg. Reward and Avg. Success across the baseline models and our method applied to each method. Note that whilst the performance difference between the two baselines is small, both of our models outperform the baselines by a significant margin. Specifically, our PLEX LSTM variant shows a significant improvement over both baselines, achieving a higher than the baseline PPO LSTM model trained on more than double the number of environments steps. The performance of our method compared with PPO LSTM on close and far subsets of the test set. In this case, close goals were selected when mean geodesic shortest path distance was less than 5.76m and were classified as far goals otherwise. "Controller" and our Planner serves as the "Meta Controller". This allows us to assess the effect of not providing the Planner with an ELI vector. Empirically, these are shown in Fig. 5a. We can see that when the ELI vector is not provided, the communication channel from the Executor to the Planner is removed and we observe a decline in performance. This observation aligns well with our framework and demonstrates the value of enabling communication from the Executor back to the Planner. Across both reward and success ratio curves, the two baseline methods gradually exhibit a slow down behaviour starting at around 15M timesteps. In contrast, our PLEX framework appears relatively robust, demonstrating a more consistent learning pace. This highlights a key difference between our hierarchical method compared to the baseline approaches. Both baselines and our method are able to successfully learn simpler tasks, where the goal is relatively close to the agent's spawn location. However, for tasks where the goal is further away, the need for a more structured approach is necessary. Our empirical illustrate a potential drawback of LSTM-like units (and by extension, GRU), where they may suffer when learning long time horizon tasks. 
We divide the test set into two subsets containing close and far goals using a dividing criteria which uses the mean geodesic shortest path between the agent's spawn location and its end goal. In this case, close goals were selected when mean distance was less than 5.76m and were classified as far goals otherwise. The for this experiment are shown in Fig. 5b where the PPO LSTM is unable to navigate to the far goals in the allocated number of environment steps. In contrast, our method is far more successful in this task and is able to achieve a higher score on reaching far goals due to the extra reward provided for navigating to further away targets. Although the PPO LSTM baseline does eventually learn to navigate to further targets, it requires a much larger number of training iterations and environment invocations. In this work, we present a hierarchical reinforcement learning approach for solving PointGoal navigation tasks. Our proposed approach uses a cooperative learning strategy in which two agents, an Executor and a Planner are jointly learned to solve this task. This is enabled through a two-way communication channel established between the two agents through the use of an Executor Latent Information vector provided by the Executor and sub-goals generated by the Planner. We motivate the use of this hierarchical approach both theoretically, as well as through empirical experiments which demonstrate a significant improvement in sampling efficiency of our approach, allowing our structured approach to perform significantly better on increasingly harder tasks when compared to baseline approaches. Given an embodied agent trajectory of length N with corresponding depth images {D n} N n=1 ∈ R H×W and relative camera poses {T n} N n=1 ∈ SE, the set of 3D points P n ∈ R HW ×3 for frame n are extracted as follow: where the camera intrinsic matrix is K ∈ R 3×3. We then define the global set of all points at the end of the trajectory as: M ∈ R HW N ×3. The set M starts empty and is iteratively updated by the following rule: M n = M n−1 ∪ P n As the set of all points M is built, the egocentric map can be derived by projecting all points in M to a plane (we lose the "height" dimension and points projected to the same slot are resolved by taking the highest point), cropping and aligning a box using the vector of the agent's facing direction which in an egocentric map. Proof. The moment generating function of S N is given by: The probability of the Bernoulli sum being greater then αN with α > 1 2 can be bounded using the Chernoff bound: Analytically solving this bound, we obtain θ * = log α 1−α. We are specifically interested in the case where α > 1 2, and as a θ * > 0. Substituting θ * into Eq. 5 and defining β ≡ log 1 1−α + α log α 1−α concludes the proof. Algorithm 1 A pseudo algorithm of Executor Policy Gradient; Actor-Critic variant N EX ← initialise Executor rollout length γ EX ← initialise Executor discount factor g P ← initialise Planner goal r P = 0 initialise Planner reward buf f ers[s b, a b, r b] ← initialise state, action, reward buffers for i = 1 to N EX do a i, e θ EX ∼ π θ EX (a i, e θ EX |s i−1, g P) Sample policy for action and latent information s i, t i ∼ P(s i, t i |s i−1, a i) Sample environment for state and terminal r i ∼ R(r i |s i−1, a i, g P) Sample reward given Planner goal accumulate r P ← r ∼ R(r|s i−1, a i, g) Sample reward given end goal update buf f ers ← s i−1, a i, r i end for π θ EX ← update(π θ EX, buf f ers, γ EX) (as in PPO ) return (s i, r P, t i, e θ EX)
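The point extraction equation referenced in Section A.1 is elided in this text; it describes the standard pinhole-camera unprojection, which under that assumption looks roughly as follows (the function name and tensor shapes are ours).

import torch

def depth_to_world_points(D, K, T):
    # D: (H, W) depth image, K: (3, 3) camera intrinsics, T: (4, 4) pose in SE(3).
    # Back-project every pixel with the inverse intrinsics, scale by its depth,
    # then transform the camera-frame points into the world frame with T.
    H, W = D.shape
    v, u = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                          torch.arange(W, dtype=torch.float32), indexing="ij")
    pix = torch.stack([u.reshape(-1), v.reshape(-1), torch.ones(H * W)], dim=0)
    cam = torch.linalg.inv(K) @ pix * D.reshape(1, -1)   # camera-frame points
    world = T[:3, :3] @ cam + T[:3, 3:4]                  # world-frame points
    return world.t()                                      # (H*W, 3)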
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
r1g7xT4Kwr
We present a hierarchical learning framework for navigation within an embodied learning setting
Saliency methods aim to explain the predictions of deep neural networks. These methods lack reliability when the explanation is sensitive to factors that do not contribute to the model prediction. We use a simple and common pre-processing step ---adding a mean shift to the input data--- to show that a transformation with no effect on the model can cause numerous methods to incorrectly attribute. We define input invariance as the requirement that a saliency method mirror the sensitivity of the model with respect to transformations of the input. We show, through several examples, that saliency methods that do not satisfy a input invariance property are unreliable and can lead to misleading and inaccurate attribution. While considerable research has focused on discerning the decision process of neural networks BID1 BID9 BID2 BID15 BID12 BID0 BID14 BID6 BID5 BID16 BID13 BID11 BID4, there remains a trade-off between model complexity and interpretability. Research to address this tension is urgently needed; reliable explanations build trust with users, helps identify points of model failure, and removes barriers to entry for the deployment of deep neural networks in domains high stakes like health care and others. In deep models, data representation is delegated to the model. We cannot generally say in an informative way what led to a model prediction. Instead, saliency methods aim to infer insights about the f (x) learnt by the model by ranking the explanatory power of constituent inputs. While unified in purpose, these methods are surprisingly divergent and non-overlapping in outcome. Evaluating the reliability of these methods is complicated by a lack of ground truth, as ground truth would depend upon full transparency into how a model arrives at a decision -the very problem we are trying to solve for in the first place BID13 BID4.Given the need for a quantitative method of comparison, several properties such as completeness BID0 BID13, implementation invariance and sensitivity BID13 have been articulated as desirable to ensure that saliency methods are reliable. Implementation invariance, proposed as an axiom for attribution methods by BID13, is the requirement that functionally equivalent networks (models with different architectures but equal outputs all inputs), always attribute in an identical way. This work posits that a second invariance axiom, which we term input invariance, needs to be satisfied to ensure reliable interpretation of input contribution to the model prediction. Input invariance requires that the saliency method mirror the sensitivity of the model with respect to transformations of the input. We demonstrate that numerous methods do not satisfy input invariance using a simple transformation -mean shifts of the input-that does not affect model prediction or weights. We limit our treatment of input invariance to showing that there exist cases where this property is not satisfied and welcome future research on a broader treatment of this topic. In this work we:• introduce the axiom input invariance and demonstrate that certain saliency methods do not satisfy this property when considering a simple mean shift in the input. (See FIG3).• show that when input invariance is missing, the saliency method becomes unreliable and misleading. Using two example reference points for each method we demonstrate that changing the reference causes the attribution to diverge. The attributions are visualized by multiplying them with the input image as is done in the IG paper 1 BID13. 
Visualisations were made on ImageNet BID7 and the VGG16 architecture BID9.• demonstrate that "reference point" methods-Integrated gradients and the Deep Taylor Decomposition-have diverging attribution and input invariance breaking points that depends upon the choice of reference FIG0.In Section 2, we detail our experiment framework. In Section 3, we determine that while the model is invariant to the input transformation considered, several saliency methods attribute to the mean shift. In Section 4 we discuss "reference point" methods and illustrate the importance of choosing an appropriate reference before discussing some directions of future research in Section 5. We show that, by construction, the bias of a neural network compensates for the mean shift ing in two networks with identical weights and predictions. We first demonstrate this point and then describe the details of our experiment setup to evaluate the input invariance of a set of saliency methods. We compare the attribution across two networks, f 1 (x) and f 2 (x). f 1 (x) is a network trained on input x i 1 that denotes sample i from training set X 1. The classification task of network 1 is: DISPLAYFORM0 is a network that predicts the classification of a transformed input x ∀i, DISPLAYFORM1 Network 1 and 2 differ only by construction. Consider the first layer neuron before non-linearity in DISPLAYFORM2 We alter the biases in the first layer neuron by adding the mean shift m 2. This now becomes Network 2: DISPLAYFORM3 As a the first layer activations are the same for f 1 (x) and f 2 (x): DISPLAYFORM4 Note that the gradient with respect to the input remains unchanged as well: DISPLAYFORM5 We have shown that Network 2 cancels out the mean shift transformation. This means that f 1 (x) and f 2 (x) have identical weights and produce the same output for the corresponding samples, DISPLAYFORM6 In the implementation of this experimental framework, Network 1 is a 3 layer multi-layer perceptron with 1024 ReLu-activated neurons each. Network 1 classifies MNIST image inputs in a encoding. Network 2 classifies MNIST image inputs in a [-1,0] MNIST encoding. The first network is trained for 10 epochs using mini-batch stochastic gradient descent (SGD). The second network is created using the approach above. The final accuracy is 98.3% for both 3. In 3.1 we introduce key approaches to the classification of inputs as salient and the saliency methods we evaluate. In 3.2 we find that gradient and signal methods are input invariant. In 3.3 we find that most attribution methods considered have points where they start to break down. Most saliency research to date has centered on convolutional neural networks. These saliency methods broadly fall into three different categories:1. Gradients (Sensitivity) BID1 BID10 ) shows how a small change to the input affects the classification score for the output of interest.2. Signal methods such as DeConvNet BID15, Guided BackProp BID12 and PatternNet BID4 aim to isolate input patterns that stimulate neuron activation in higher layers.3. Attribution methods such as Deep-Taylor Decomposition BID5 and Integrated Gradients BID13 ) assign importance to input dimensions by decomposing the value y j at an output neuron j into contributions from the individual input dimensions: DISPLAYFORM0 s j is the decomposition into input contributions and has the same number of dimensions as x, A(x) j signifies the attribution method applied to output j for sample x. 
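The bias construction above, in which Network 2 absorbs the mean shift into its first-layer biases, can be written in a few lines. The following is a minimal sketch assuming the first layer is exposed as an `nn.Linear` attribute named `fc1` (a hypothetical name): with b2 = b1 − W1 m2, `net2(x + m2)` matches `net1(x)` exactly, so the two networks have identical weights and predictions.

```python
import copy
import torch

def build_shifted_network(net1, m2):
    """Return net2 such that net2(x + m2) == net1(x) for every input x.

    Only the first-layer bias changes: b2 = b1 - W1 @ m2. All other
    parameters are copied unchanged, so gradients w.r.t. the input agree.
    """
    net2 = copy.deepcopy(net1)
    with torch.no_grad():
        net2.fc1.bias -= net2.fc1.weight @ m2   # nn.Linear stores weight as [out, in]
    return net2
```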
Attribution methods are distinguished from gradients by the insistence on completeness: the sum of all attributions should be approximately equal to the original output y i. We consider the input invariance of each category separately (by evaluating raw gradients, GuidedBackprop, Integrated Gradients and Deep Taylor Decomposition) and also benchmark the input invariance of SmoothGrad (BID11), a method that wraps around an underlying saliency approach and uses the addition of noise to produce a sharper visualization of the saliency heatmap. The experiment setup and methodology is as described in Section 2. Each method is evaluated by comparing the saliency heatmaps for the predictions of network 1 and 2, where x i 2 is simply the mean shifted input (x i 1 + m 2). A saliency method that is not input invariant will not produce identical saliency heatmap for Network 1 and 2 despite the mean shift of the input. Sensitivity and signal methods are not sensitive to the mean shift in inputs. In FIG2 raw gradients, PatternNet (PN, BID4) and Guided Backprop produce identical saliency heatmaps for both networks. Intuitively, gradient, PN and GB are input invariant given that we are comparing two networks with an identical f (x). Both methods determine attribution entirely as a function of the network/pattern weights and thus will be input invariant as long as we are comparing networks with identical weights. In the same manner, we can say that these methods will not be input invariant when comparing networks with different weights (even if we consider models with different architectures but identical predictions for every input). We evaluate the following attribution methods: gradient times input (GI), integrated gradients (IG, BID13) and the deep-taylor decomposition (DTD, BID5).In 3.3.1 we find GI to be sensitive to meaningless input shifts. In 3.3.2 we group discussion of IG and DTD under "reference point" methods because both require that attribution is done in reference to a defined point. We find that the choice of reference point can cause input invariance to become arbitrary. We find that the multiplication of raw gradients by the image breaks attribution reliability. In FIG3 GI produces different saliency heatmaps for both networks.. Gradient x Input, IG and DTD with a zero reference point, which is equivalent to LRP BID0 BID5, are not reliable and produce different attribution for each network. IG with a black image reference point and DTD with a PA reference point are not sensitive to the transformation of the input. In 3.2 we determined that a heatmap of gradients alone is not sensitive to the input transformation. GI multiplies the gradient w.r.t. the input with the input image. DISPLAYFORM0 Multiplying by the input means attribution is no longer reliable because the input shift is carried through to final attribution. Naive multiplication by the input, as noted by BID11, also constrains attribution without justification to inputs that are not 0. Both Integrated Gradients (IG, BID13) and Deep Taylor Decomposition (DTD, BID5) determine the importance of inputs relative to a reference point. DTD refers to this as the root point and IG terms the reference point a baseline. The choice of reference point is not determined a priori by the method and instead left to end user. The choice of reference point determines all subsequent attribution. In FIG0 IG and DTD show different attribution depending on the choice of reference point. 
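To make the distinction between the sensitivity heatmap and gradient times input concrete, here is a minimal sketch (a single un-batched example and a network returning logits are assumed). The plain gradient is identical for Network 1 and Network 2, whereas multiplying by the input carries the mean shift into the attribution, which is the failure discussed above.

```python
import torch

def saliency(net, x, target):
    """Gradient of the target logit w.r.t. the input (the 'sensitivity' heatmap)."""
    x = x.clone().requires_grad_(True)
    net(x)[target].backward()
    return x.grad.detach()

def gradient_times_input(net, x, target):
    """Gradient * input attribution; the input factor re-introduces any constant shift."""
    return saliency(net, x, target) * x
```

With `net2 = build_shifted_network(net1, m2)`, `saliency(net1, x, t)` and `saliency(net2, x + m2, t)` coincide, while the two `gradient_times_input` heatmaps differ by the gradient multiplied element-wise with m2.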
We show that the certain reference point also cause IG and DTD to are not input invariant. Integrated gradients (IG) Integrated Gradients (IG, BID13) attribute the predicted score to each input with respect to a baseline x 0. This is achieved by constructing a set of inputs interpolating between the baseline and the input. DISPLAYFORM0 Since this integral cannot be computed analytically, it is approximated by a finite sum ranging over α ∈. DISPLAYFORM1 We evaluate whether two possible IG reference points satisfy input invariance. Firstly, we consider an image populated uniformly with the minimum pixel from the dataset (x 0 = min(x)) (black image) and a zero vector image. We find that IG attribution under certain reference points is not input invariant. In FIG3, IG with black reference point produces identical attribution heatmaps whereas IG with a zero vector reference point is not input invariant. IG using a black reference point is not sensitive to the mean input shift because x 0 = min(x) is determined after the mean shift of the input so the difference between x and x 0 remains the same for both networks. In network 1 this is (x 1) − min(x 1) and in network 2 this is (x 2 + m 2) − min(x 2 + m 2).IG with a zero vector reference point is not input invariant because while the difference in network 1 is (x 1 − x 0), the difference in network 2 becomes (x 2 + m 2) − x 0.Deep Taylor Decomposition (DTD) determines attribution relative to a reference point neuron. DTD can satisfy input invariant if the right reference point is chosen. In the general formulation, the attribution of an input neuron j is initialized to be equal to the output of that neuron. The attribution of other output neurons is set to zero. This attribution is backpropagated to its input neurons using the following distribution rule where s l j is the attribution assigned to neuron j in layer l: DISPLAYFORM2 We evaluate the input invariance of DTD using a reference point determined by Layer-wise Relevance Propagation (LRP) and PatternAttribution (PA). In FIG3, DTD satisfies input invariance when using a reference point defined by PA however it loses reliability when using a reference point defined by LRP.Layer-wise Relevance Propagation (LRP, BID0) is sensitive to the input shift because it is a case of DTD where a zero vector is chosen as the root point. 2. The back-propagation rule becomes: DISPLAYFORM3 s l−1,j depends only upon the input and so attribution will change between network 1 and 2 because x 1 andx 2 differ by a constant vector. PatternAttribution (PA) satisfies input invariance because the reference point x 0 is defined as the natural direction of variation in the data BID4. The natural direction of the data is determined based upon covariances and thus compensates explicitly for the mean in the data. Therefore it is by construction input invariant. The PA root point is: DISPLAYFORM4 where DISPLAYFORM5 In a linear model: DISPLAYFORM6 For neurons followed by a ReLu non-linearity the vector a accounts for the non-linearity and is computed as: DISPLAYFORM7.Here E + denotes the expectation taken over values where y is positive. Figure 4: Smoothgrad inherits the invariance properties of the underlying attribution method. SG is not sensitive to the input transformation for gradient and signal methods (SG-PA and and SG-GB). 
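A minimal Riemann-sum sketch of Integrated Gradients follows, using the same un-batched conventions as the helpers above. The `baseline` argument is exactly the reference point whose choice is discussed in this section: a black image (the dataset minimum, recomputed after the shift) preserves input invariance, a fixed zero vector does not.

```python
import torch

def integrated_gradients(net, x, baseline, target, steps=50):
    """Approximate IG: (x - x0) * mean over alpha of grad f(x0 + alpha * (x - x0))."""
    total = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = (baseline + alpha * (x - baseline)).clone().requires_grad_(True)
        net(point)[target].backward()
        total += point.grad
    return (x - baseline) * total / steps
```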
SG lacks input invariance for integrated gradients and deep taylor decomposition when a zero vector refernce point is used, but is not sensitive when PatternAttribution (SG-PA) or a black image (SG-Black) are used. SG is not input invariant for gradient x input. PA reduces to the following step: DISPLAYFORM8 The vector a depends upon covariances and thus removes the mean shift of the input. The attribution for both networks is identical. SmoothGrad replaces the input with N identical versions of the input with added random noise. These noisy inputs are injected into the underlying attribution method and final attribution is the average attribution across N. For example, if the underlying methods are gradients w.r.t. the input. g(x) j = ∂f (x)j ∂x SG becomes: DISPLAYFORM0 SG often in aesthetically sharper visualizations when applied to multi-layer neural networks with non-linearities. SG does not alter the attribution method itself so will always inherit the input Figure 5: Evaluation of attribution method sensitivity using MNIST. Gradient x Input, IG with both a black and zero reference point and DTD with a LRP reference point, do not satisfy input invariance and produce different attribution for each network. DTD with a PA reference point are not sensitive to the transformation of the input.invariance of the underlying method. In Fig. 4 applying SG on top of gradients and signal methods (PA and GB) produces identical saliency maps. SG is not input invariant when applied to gradient x input, LRP and zero vector reference points which compares SG heatmaps generated for all methods discussed so far. SG is not sensitive to the input transformation when applied to PA and a black image. IG and DTD do not satisfy input invariance under certain reference points. The reference point determines subsequent attribution. In FIG0 attribution visually diverges for the same method if multiple reference points are considered. A reasonable reference point for IG and DTD will naturally depend upon domain and task. Unintentional misrepresentation of the model is very possible when the choice of vantage point can lead to very different . Thus far, we have discussed attribution for image recognition tasks with the assumption that preprocessing steps are known and visual inspection of the points determined to be salient is possible. For Audio and Language based models where input interaction is more intricate, attribution becomes even more challenging. If we cannot determine the implications of reference point choice, we are limited in our ability to say anything about the reliability of the method. To demonstrate this point, we construct a constant shift of the input that takes advantage of attribution points of failure discussed thus far. Almost all methods are sensitive to this input transformation which in a misleading explanation of the model prediction. Network 1 is the same as introduced in Section 2. We consider a transformation x ∀i, DISPLAYFORM0 Network 2 is identical to network 1 by construction (see Section 2). Note that x In Fig. 5 all attribution methods except for PA are sensitive to this constant shift. The is that we are able to manipulate the attribution heatmap of an MNIST prediction so that the chosen samplê x appears. Using a black image as a reference point for IG no longer satisfies input invariance (as it did in the experiments in Section 3).The samplex can be any abitrary vector. We conduct the same experiment with a hand drawn kitten image. 
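A sketch of the SmoothGrad wrapper described above: it averages any underlying attribution over noisy copies of the input, and therefore inherits that method's input-invariance behaviour unchanged. Treating the noise scale as a fraction of the input range is an assumption of this sketch, not a detail given in the text.

```python
import torch

def smoothgrad(attr_fn, net, x, target, n=50, sigma=0.15):
    """Average an underlying attribution method over n noisy copies of the input.

    attr_fn is any of the helpers sketched earlier (saliency, gradient_times_input,
    integrated_gradients with a fixed baseline, ...).
    """
    scale = sigma * (x.max() - x.min())
    total = torch.zeros_like(x)
    for _ in range(n):
        noisy = x + scale * torch.randn_like(x)
        total += attr_fn(net, noisy, target)
    return total / n
```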
We construct m 2 by choosing a desired attributionŝ that should be assigned to a specific samplex when the gradient is multiplied with the input. We compute a m 2 that will ensure the specific x i 2 receives the desired attribution as follows: DISPLAYFORM1 To make sure that the original image is still recognizable as belonging to its class, we clip the shift to be within [-.3,.3]. Of course, the attributions of the other samples in the dataset is impacted too. In FIG6 we see that we are again able to purposefully misrepresent the explanation of the model prediction. It is important to note that that some of these methods would have satisfied input invariance if the data had been normalized prior to attribution. For example, IG with a black baseline will satisfy input invariance if the data is always normalized. However, this is far from a systematic treatment of the reference point selection and there are cases outside of our experiment scope where this would not be sufficient. We believe an open research question is furthering the understanding of reference point choice that guarantee reliability without relying on case-by-case solutions. Saliency methods are powerful tools to gain intuition about our model. We consider some examples that can cause a break in the reliability of these methods. We show that we are able to purposefully create a deceptive explanation of the network using a hand drawn kitten image. We introduce input invariance as a prerequisite for reliable attribution. Our treatment of input invariance is restricted to demonstrating there is at least one input transformation that causes attribution to fail. We hope this work drives further discussion on this subject. We also acknowledge that saliency methods may still provide intuition for image recognition tasks even if they are not input invariant. Our work is motivated in part because while we can visually inspect for catasthropic attribution failure in images, other modalities (like audio or word vectors) are more opaque and prone to unintentional misrepresentation.
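The explicit formula for m2 is elided above, so the following is one consistent way to realise the construction rather than the authors' exact procedure: because the gradient g is unaffected by a constant shift, choosing m2 = ŝ / g − x̂ makes g ⊙ (x̂ + m2) equal the desired attribution ŝ under gradient times input; the shift is then clipped to [−0.3, 0.3] as stated in the text. `saliency` refers to the helper sketched earlier.

```python
import torch

def attack_shift(net, x_hat, s_hat, target, clip=0.3):
    """Construct a constant input shift that makes gradient*input show s_hat at x_hat.

    A sketch under the assumption m2 = s_hat / g - x_hat; the small epsilon only
    guards against exact zeros in the gradient.
    """
    g = saliency(net, x_hat, target)          # unchanged by any constant shift
    m2 = s_hat / (g + 1e-12) - x_hat
    return m2.clamp(-clip, clip)
```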
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
r1Oen--RW
Attribution can sometimes be misleading
Large Transformer models routinely achieve state-of-the-art on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O(L^2) to O(L), where L is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of N times, where N is the number of layers. The ing model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences. The Transformer architecture is widely used in natural language processing and yields state-of-the-art on a number of tasks. To obtain these , researchers have resorted to training ever larger Transformer models. The number of parameters exceeds 0.5B per layer in the largest configuration reported in while the number of layers goes up to 64 in . Transformer models are also used on increasingly long sequences. Up to 11 thousand tokens of text in a single example were processed in and when processing other modalities, like music and images, even longer sequences are commonplace. These large-scale long-sequence models yield great but strain resources to the point where some argue that this trend is breaking NLP research 1. Many large Transformer models can only realistically be trained in large industrial research laboratories and such models trained with model parallelism cannot even be fine-tuned on a single GPU as their memory requirements demand a multi-accelerator hardware setup even for a single training step. Do large Transformer models fundamentally require such huge resources or are they simply inefficient? Consider the following calculation: the 0.5B parameters used in the largest reported Transformer layer account for 2GB of memory. Activations for 64K tokens with embedding size 1024 and batch size 8 account for 64K × 1K × 8 = 0.5B floats, requiring another 2GB of memory. If our memory use was only per-layer, then we should fairly easily fit a large Transformer even on sequences of length 64K on a single accelerator. Further, the whole corpus used to train BERT only requires 17GB to store. Why is it then that we cannot even fine-tune these models on single machines? The above estimate includes only per-layer memory and input activations cost and does not take into account the following major sources of memory use in the Transformer. • Memory in a model with N layers is N -times larger than in a single-layer model due to the fact that activations need to be stored for back-propagation. • Since the depth d f f of intermediate feed-forward layers is often much larger than the depth d model of attention activations, it accounts for a large fraction of memory use. • Attention on sequences of length L is O(L 2) in both computational and memory complexity, so even for a single sequence of 64K tokens can exhaust accelerator memory. We introduce the Reformer model which solves these problems using the following techniques: 1 https://hackingsemantics.xyz/2019/leaderboards/ • Reversible layers, first introduced in , enable storing only a single copy of activations in the whole model, so the N factor disappears. 
• Splitting activations inside feed-forward layers and processing them in chunks removes the d f f factor and saves memory inside feed-forward layers. • Approximate attention computation based on locality-sensitive hashing replaces the O(L 2) factor in attention layers with O(L) and so allows operating on long sequences. We study these techniques and show that they have negligible impact on the training process compared to the standard Transformer. Splitting activations in fact only affects the implementation; it is numerically identical to the layers used in the Transformer. Applying reversible residuals instead of the standard ones does change the model but has a negligible effect on training in all configurations we experimented with. Finally, locality-sensitive hashing in attention is a more major change that can influence the training dynamics, depending on the number of concurrent hashes used. We study this parameter and find a value which is both efficient to use and yields very close to full attention. We experiment on a synthetic task, a text task (enwik8) with sequences of length 64K and an image generation task (imagenet-64 generation) with sequences of length 12K. In both cases we show that Reformer matches the obtained with full Transformer but runs much faster, especially on the text task, and with orders of magnitude better memory efficiency. Dot-product attention. The standard attention used in the Transformer is the scaled dot-product attention . The input consists of queries and keys of dimension d k, and values of dimension d v. The dot products of the query with all keys are computed, scaled by √ d k, and a softmax function is applied to obtain the weights on the values. In practice, the attention function on a set of queries is computed simultaneously, packed together into a matrix Q. Assuming the keys and values are also packed together into matrices K and V, the matrix of outputs is defined as: Multi-head attention. length, length]. In the experimental section we train a model on sequences of length 64K -in this case, even at batch-size of 1, this is a 64K × 64K matrix, which in 32-bit floats would take 16GB of memory. This is impractical and has hindered the use of the Transformer for long sequences. But it is important to note that the QK T matrix does not need to be fully materialized in memory. The attention can indeed be computed for each query q i separately, only calculating softmax(V once in memory, and then re-computing it on the backward pass when needed for gradients. This way of computing attention may be less efficient but it only uses memory proportional to length. We use this memory-efficient implementation of attention to run the full-attention baselines presented in the experimental section. Where do Q, K, V come from? The multi-head attention described above operates on keys, queries and values, but usually we are only given a single tensor of activations A of the shape [batch size, length, d model] -e.g., coming from embedding the tokens in a sentence into vectors. To build Q, K and V from A, the Transformer uses 3 different linear layers projecting A into Q, K and V with different parameters. For models with LSH attention, we want queries and keys (Q and Figure 1 : An angular locality sensitive hash uses random rotations of spherically projected points to establish buckets by an argmax over signed axes projections. 
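The memory-efficient implementation of full attention mentioned above can be sketched as follows (single head, no masking, for clarity). It is numerically the same computation as Equation 1 but processes one query at a time, so the L x L score matrix is never materialised and memory grows linearly in the sequence length.

```python
import torch

def attention_per_query(Q, K, V):
    """Scaled dot-product attention computed one query row at a time.

    Q, K: [L, d_k], V: [L, d_v]. Output: [L, d_v].
    """
    d_k = Q.shape[-1]
    out = []
    for q in Q:                                   # one query vector at a time
        scores = (K @ q) / d_k ** 0.5             # [L] scores for this query
        out.append(torch.softmax(scores, dim=-1) @ V)
    return torch.stack(out)
```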
In this highly simplified 2D depiction, two points x and y are unlikely to share the same hash buckets (above) for the three different angular hashes unless their spherical projections are close to one another (below). K) to be identical. This is easily achieved by using the same linear layer to go from A to Q and K, and a separate one for V. We call a model that behaves like this a shared-QK Transformer. It turns out that sharing QK does not affect the performance of Transformer, even if we additionally normalize the length of the keys K, as we show in the experimental Section 5. Hashing attention. For the LSH attention, we start with two tensors, Q=K and V of the shape [batch size, length, d model]. We keep the multi-head mechanism intact and focus on the attention computation from Equation 1. As already mentioned, the main issue is the term QK T, which has the shape [batch size, length, length]. But note that we are actually only interested in softmax(QK T). Since softmax is dominated by the largest elements, for each query q i we only need to focus on the keys in K that are closest to q i. For example, if K is of length 64K, for each q i we could only consider a small subset of, say, the 32 or 64 closest keys. That is much more efficient, but how can we find the nearest neighbors among the keys? Locality sensitive hashing. The problem of finding nearest neighbors quickly in high-dimensional spaces can be solved by locality-sensitive hashing (LSH). A hashing scheme that assigns each vector x to a hash h(x) is called locality-sensitive if nearby vectors get the same hash with high probability and distant ones do not. In our case, we actually only require that nearby vectors get the same hash with high probability and that hash-buckets are of similar size with high probability. We achieve this by employing random projections as follows (see Figure 1). To get b hashes, we first fix a random matrix R of size [d k, b/2]. We then define h(x) = arg max([xR; −xR]) where [u; v] denotes the concatenation of two vectors. This method is a known LSH scheme and is easy to implement and apply to batches of vectors. LSH attention. Knowing our LSH scheme and the general idea of hashing attention, we will now formalize the LSH attention we use in this paper. We first rewrite the equation for normal attention,, for a single query position i at a time: We introduce the notation P i to represent the set that the query at position i attends to, and z to denote the partition function (i.e. the normalizing term in the softmax). For clarity, we also omit scaling by For batching purposes we typically perform attention over a larger set P i = {0, 1, . . ., l} ⊇ P i while masking out elements not in P i: Now we turn to LSH attention, which we can think of in terms of restricting the set P i of target items a query position i can attend to, by only allowing attention within a single hash bucket. Figure 2 (a-b) shows a schematic comparison of full-attention with a hashed variant. Part (a) depicts that the attention matrix for full attention is typically sparse, but the computation does not take advantage of this sparsity. In (b), the queries and keys have been sorted according to their hash bucket. Since similar items fall in the same bucket with high probability, the full attention pattern can be approximated by only allowing attention within each bucket. Hash buckets in this formulation tend to be uneven in size, which makes it difficult to batch across buckets. 
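The hashing scheme h(x) = arg max([xR; −xR]) takes only a few lines. In this sketch, fixing the random seed stands in for sharing the projection matrix R between all queries and keys; in practice R would be sampled once and reused.

```python
import torch

def lsh_hash(x, n_buckets, seed=0):
    """Angular LSH: project onto n_buckets/2 random directions and take the argmax
    over the signed projections, giving a bucket id in [0, n_buckets).

    x: [..., d_k]; n_buckets must be even.
    """
    torch.manual_seed(seed)                       # stands in for a shared, fixed R
    R = torch.randn(x.shape[-1], n_buckets // 2)
    xR = x @ R
    return torch.argmax(torch.cat([xR, -xR], dim=-1), dim=-1)
```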
Moreover, the number of queries and the number of keys within a bucket may be unequalin fact, it is possible for a bucket to contain many queries but no keys. To alleviate these issues, we first ensure that h(k j) = h(q j) by setting k j = qj qj. Next, we sort the queries by bucket number and, within each bucket, by sequence position; this defines a permutation where i → s i after sorting. In the sorted attention matrix, pairs from the same bucket will cluster near the diagonal (as depicted in Figure 2c). We can follow a batching approach where chunks of m consecutive queries (after sorting) attend to each other, and one chunk back (Figure 2d). Following our earlier notation, this corresponds to setting: If max i |P i | < m, then P i ⊆ P i. In practice we set m = 2l n buckets (where l is the sequence length). The average bucket size is l n buckets, and we assume that the probability of a bucket growing to twice that size is sufficiently low. The overall process of LSH attention is summarized in Figure 2. Multi-round LSH attention. With hashing, there is always a small probability that similar items nevertheless fall in different buckets. This probability can be reduced by doing multiple rounds of hashing with n rounds distinct hash functions {h, h,...}, such that: The multi-round case essentially involves performing LSH attention n rounds times in parallel; the details of the procedure are described in in Appendix A. Causal masking for shared-QK attention. In a Transformer decoder, masking (denoted by m(j, P i) in Equation 3 ) is used to prevent positions from attending into the future. To implement masking in LSH attention, we associate every query/key vector with a position index, re-order the position indices using the same permutations used to sort the query/key vectors, and then use a comparison operation to compute the mask. While attention to the future is not allowed, typical implementations of the Transformer do allow a position to attend to itself. Such behavior is undesirable in a shared-QK formulation because the dot-product of a query vector with itself will almost always be greater than the dot product of a query vector with a vector at another position. We therefore modify the masking to forbid a token from attending to itself, except in situations where a token has no other valid attention targets (e.g. the first token in a sequence). To verify the performance of LSH attention and study its behavior, we start with the following synthetic task: duplicate a sequence of symbols. In this task, each training and testing example has the form 0w0w where w ∈ {1, . . ., N} * is a sequence of symbols ranging from 1 to N (we use N = 127 in our experiments). An example with the word w of length 3 is given below. To study LSH attention, we train a language model on examples of the above form where each w is of length 511 (so the whole input 0w0w is of length 1024). As this is a language modeling task, we always predict the next symbol given all the previous ones, but we mask the loss and accuracy to only consider positions in the second half of the input, i.e., those that can actually be predicted. The above task can be solved perfectly (to accuracy 100% and loss 0) by a 1-layer Transformer model. Note though, that it requires non-local attention lookups, so it cannot be solved by any model relying on sparse attention with a limited span. To make it easy and fast to train but similar to models used in NLP, we use a 1-layer Transformer with d model = d f f = 256 and 4 heads. 
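A deliberately simplified, single-round, single-head sketch of the sort-and-chunk procedure described above: shared-QK normalisation of the keys, sorting by bucket and then by position, and letting each chunk of sorted positions attend to itself and one chunk back. It omits causal masking, the within-chunk bucket-equality mask, batching, and multi-round hashing, and relies on `lsh_hash` from the previous sketch.

```python
import torch

def lsh_attention(Q, V, n_buckets, chunk=64):
    """Simplified single-round LSH attention for one sequence and one head."""
    L = Q.shape[0]
    K = Q / Q.norm(dim=-1, keepdim=True)                  # k_j = q_j / |q_j| (shared QK)
    buckets = lsh_hash(Q, n_buckets)
    order = torch.argsort(buckets * L + torch.arange(L))  # sort by bucket, then position
    inv = torch.argsort(order)
    Qs, Ks, Vs = Q[order], K[order], V[order]

    out = torch.zeros_like(V)
    for start in range(0, L, chunk):
        end = min(start + chunk, L)
        lo = max(0, start - chunk)                        # this chunk plus one chunk back
        scores = Qs[start:end] @ Ks[lo:end].T / Q.shape[-1] ** 0.5
        out[start:end] = torch.softmax(scores, dim=-1) @ Vs[lo:end]
    return out[inv]                                       # undo the bucket sort
```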
We train it for 150K steps in 4 different settings: with full attention, LSH attention with n rounds = 1, n rounds = 2 and n rounds = 4. From the summarized in Table 2 we see that a model trained with full attention can be immediately used with LSH attention, but at some loss of accuracy. When trained from scratch with LSH attention, the model trained with 4 hashes achieves almost perfect accuracy as well. Interestingly, the accuracy becomes perfect when evaluated with 8 hashes. It goes down when evaluated with 2 or 1 hashes. Models trained with less hashes show worse but even the model trained with just 1 hash performs almost perfectly when evaluated with 8 hashes. As the above section shows, the complexity of attention can be reduced from square in length to linear, provided an approximation is acceptable. But it is clear from Table 1 that each field starts with a b · n h · l term: the b · n h · l · d k, or alternatively b · l · d model cost cannot be avoided. Indeed, the activations before each layer are already of the size b · l · d model, so the memory use of the whole model with n l layers is at least b · l · d model · n l. Even worse: inside the feed-forward layers of Transformer this goes up to b · l · d f f · n l. In a big Transformer it is usual to set d f f = 4K and n l = 16 so with l = 64K this again would use an impractical 16GB of memory In this section, we show how to reduce this cost by first dealing with the n l part of the term using reversible layers and then showing how chunking can allow us to handle the d f f problem. RevNets. Reversible residual networks were introduced by where it was shown that they can replace ResNets for image classification. The main idea is to allow the activations at any given layer to be recovered from the activations at the following layer, using only the model parameters. Rather than having to checkpoint intermediate values for use in the backward pass, layers can be reversed one-by-one as back-propagation proceeds from the output of the network to its input. Whereas a normal residual layer performs a function x → y that operates on a single input and produces a single output and has the form y = x + F (x), a reversible layer works on pairs of inputs/outputs: (x 1, x 2) → (y 1, y 2), and follows the equations: A layer can be reversed by subtracting (rather than adding) the residuals: Reversible Transformer. We apply the RevNet idea to the Transformer by combining the attention and feed-forward layers inside the revnet block. In the notation above, F becomes an attention layer while G becomes the feed-forward layer. Note that Layer Normalization is moved inside the residual blocks. The reversible Transformer does not need to store activations in each layer and so gets rid of the n l term. In Section 5 we show that it performs the same as the normal Transformer when using the same number of parameters; we achieve this by having both x 1 and x 2 have size d model. Chunking. While reversibility covers the n l term, the thicker layers can still use a lot of memory. The feed-forward layer in particular can use intermediate vectors of dimensionality d f f = 4K or higher. However, computations in feed-forward layers are completely independent across positions in a sequence, so the computation can be split into c chunks: This layer is typically batched by performing operations for all positions in parallel, but operating on one chunk at a time can reduce memory. The reverse computation in and the backward pass are also chunked. 
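The reversible block and the chunked feed-forward computation can be sketched directly from the equations above. The block below recomputes activations explicitly through `reverse`; in a real implementation the memory saving additionally requires a custom backward pass that uses this reversal instead of stored activations, which is omitted here.

```python
import torch

class ReversibleBlock(torch.nn.Module):
    """Reversible residual block: y1 = x1 + F(x2), y2 = x2 + G(y1).

    In the Reformer, F is the attention layer and G the feed-forward layer.
    Inputs can be recovered from outputs: x2 = y2 - G(y1), x1 = y1 - F(x2).
    """
    def __init__(self, F, G):
        super().__init__()
        self.F, self.G = F, G

    def forward(self, x1, x2):
        y1 = x1 + self.F(x2)
        y2 = x2 + self.G(y1)
        return y1, y2

    def reverse(self, y1, y2):
        with torch.no_grad():
            x2 = y2 - self.G(y1)
            x1 = y1 - self.F(x2)
        return x1, x2
```

Chunking the feed-forward layer exploits the fact that positions are processed independently, so splitting along the sequence axis leaves the result unchanged. This sketch only illustrates that identity; the memory saving during training also relies on chunking the reverse and backward computations as described above.

```python
import torch

def chunked_feed_forward(ff, x, n_chunks):
    """Apply a position-wise feed-forward layer ff to x ([L, d_model]) in chunks.

    The output equals ff(x), but the d_ff-sized intermediate activations are
    only held for one chunk at a time.
    """
    return torch.cat([ff(part) for part in torch.chunk(x, n_chunks, dim=0)], dim=0)
```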
In addition to the feed-forward layers, for models with large vocabulary (more than d model word types) we also chunk the log-probabilities at the output and calculate the loss for sections of the sequence at a time. Chunking, large batches and parameter reuse. With chunking and reversible layers the memory we use for activations in the whole network is independent of the number of layers. The same is not true for parameters though as their number grows with the number of layers. This problem is remedied though because we can swap layer parameters to and from CPU memory when this layer is not computing. In a standard Transformer this would be inefficient because memory transfer to CPU is slow. The batch size multiplied by length in Reformer is much larger though and therefore the amount of compute done with the parameters amortizes the cost of their transfer. The Transformer model introduced in has been used widely in natural language tasks and further extended to model diverse data such as music scores , and images ). Most notably, this model class has We assume n c = l/32 so 4l/n c = 128 and we write c = 128 2. max(bld f f, bn h ln r c)n l (bld f f + bn h n r lc)n l Reformer max(bld model, bn h ln r c) (bld f f + bn h n r lc)n l been applied successfully in the self-supervised training of extremely large language models (; . Given the enormous computational requirements of state of the art sequence models, there has been increasing interest in finding methods to reduce the memory footprint and computational requirements of Transformer models. In addition to standard methods such as precision reduction and gradient checkpointing , more efficient versions of the Transformer model's self-attention mechanism (a; b) have also recently been explored. In particular, leveraging sparsity in the attention layers has proved fruitful. OpenAI introduced the sparse Transformer which exploits a factorized sparse representation of attention. Using product-key attention to increase the key space has also been used to reduce memory requirements in the feed-forward layers with no loss in performance . Locality-sensitive hashing (LSH) has, to our knowledge, not been directly applied to Transformer attention layers before. But previous work using external memory with neural networks has dealt with memories of large sizes. The original implementation of memory networks and later work on scaling it ) used memory with size in the millions. The cost of doing so is that the memory must be fixed prior to training. Moreover, since during the beginning of training the model is unlikely to query the memory correctly, strong supervision is used to encourage the model to query memory locations that are useful. These hints are either given as additional supervising information by the task or determined heuristically as in. The requirement that the memory be fixed before has been removed in at the cost of memory size and later alleviated by. The last paper considered memory lookups with approximate nearest neighbors including both LSH and random kd-trees, but only for lookups in external memory. In this section we present experimental demonstrating the techniques described above. We analyze the techniques one-by-one to make clear which combinations have impact on performance. We start by showing that reversible layers and shared query-key spaces do not impact performance, then proceed to analyze hashing attention and finally the full Reformer model. 
We ran our experiments on the imagenet64 and enwik8-64K tasks, where the latter is a variant of enwik8 that is chunked into subsequences of 2 16 = 64K tokens. We use 3-layer models for our ablations so as to make it tractable to compare with the regular Transformer, which has high memory usage and performs full O(l 2) attention. All experiments have d model = 1024, d f f = 4096, and n heads = 8. They were trained with batch sizes of one sequence per GPU, with a total of 8 GPUs operating in parallel. We used the Adafactor optimizer for training our models. Code for training our models will be made publicly available. Effect of sharing QK. We first consider the effect of shared-QK attention on a regular Transformer model. Shared-QK attention sets k j = qj qj and prevents tokens from attending to themselves (except when no other context is available). In the left part of Figure 3, we plot perplexity curves for both regular and shared-QK attention. A shared query-key space does not perform worse than Figure 3: Effect of shared query-key space (left) and reversibility (right) on performance on enwik8 and imagenet64 training. The curves show bits per dim on held-out data. regular attention; in fact, for enwik8 it appears to train slightly faster. In other words, we are not sacrificing accuracy by switching to shared-QK attention. Effect of reversible layers. In the two plots on the right in Figure 3, we compare a regular Transformer per with the reversible one describe in Section 3. The two models have identical parameter counts, and the learning curves likewise appear to be nearly the same. These show that the memory savings in the reversible Transformer do not come at the expense of accuracy. LSH attention in Transformer. LSH attention is an approximation for full attention that, as evidenced in Figure 4, becomes more accurate as the number of hashes increases. At n rounds = 8, it already almost matches full attention. The computational cost of a model grows with the number of hashes, so this hyperparameter can be adjusted depending on the available compute budget. Additionally, as in Table 2, the number of hashes can be increased at evaluation time to produce more accurate . On the right half of Figure 5, we plot the speed of different attention types vs. the sequence length, while holding the total number of tokens fixed. We see that while regular attention becomes slower at longer sequence lengeth, LSH attention speed remains flat. Large Reformer models. To verify that the Reformer can indeed fit large models on a single core and train fast on long sequences, we train up to 20-layer big Reformers on enwik8 and imagenet64. As can be seen in Figure 5, these models fit into memory and train. We were not able to train Transformer baselines in this case as they are too slow and memory-hungry, but we see clear improvement with the number of layers. A 12-layer model on enwik8 trained for 20K steps with a dropout rate of 0.1 achieves 1.19 bits/dim on the test set. We also trained a 12-layer Reformer model for longer with further tuning and improvements and we reached 1.05 bits/dim on the enwiki8 test set 3. Reformer combines the modeling capacity of a Transformer with an architecture that can be executed efficiently on long sequences and with small memory use even for models with a large number of layers. We believe that this will help large, richly-parameterized Transformer models become more widespread and accessible. 
Also, the ability to handle long sequences opens the way for the use of the Reformer on many generative tasks. In addition to generating very long coherent text, the Reformer can bring the power of Transformer models to other domains like time-series forecasting, music, image and video generation. In this section we describe in more detail the multi-hash version of our LSH attention mechanism. We first repeat Equation from the main text, which describes a general formulation of attention with sparsity: exp (q i · k i − m(j, P i) − z(i, P i)) v j where m(j, P i) = ∞ if j / ∈ P i 0 otherwise In the multi-round case, a query position i can attend to key positions P i as defined in, which we also repeat here: For batching purposes, attention is performed on chunks of sorted queries/keys: Combining and gives: i that can be computed independently from other rounds, except for the inclusion of a term N i,j to avoid double-counting elements when constructing the union of P (r) i sets. In our implementation we fold the N i,j factor into the masking term m We also modify m (r) i,j to introduce a special case for i = j. This case is added because causal masking in a standard Transformer allows position i to attend to itself, which is not desirable in a shared-QK formulation. We set the mask to a large but finite value to disallow attention-in-place, except in the situation where a token has no other valid attention targets. For example, the first token in a sequence attends only to itself, because no prior context is available.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rkgNKkHtvB
Efficient Transformer with locality-sensitive hashing and reversible layers
Obtaining policies that can generalise to new environments in reinforcement learning is challenging. In this work, we demonstrate that language understanding via a reading policy learner is a promising vehicle for generalisation to new environments. We propose a grounded policy learning problem, Read to Fight Monsters (RTFM), in which the agent must jointly reason over a language goal, relevant dynamics described in a document, and environment observations. We procedurally generate environment dynamics and corresponding language descriptions of the dynamics, such that agents must read to understand new environment dynamics instead of memorising any particular information. In addition, we propose txt2π, a model that captures three-way interactions between the goal, document, and observations. On RTFM, txt2π generalises to new environments with dynamics not seen during training via reading. Furthermore, our model outperforms baselines such as FiLM and language-conditioned CNNs on RTFM. Through curriculum learning, txt2π produces policies that excel on complex RTFM tasks requiring several reasoning and coreference steps. Reinforcement learning (RL) has been successful in a variety of areas such as continuous control , dialogue systems , and game-playing . However, RL adoption in real-world problems is limited due to poor sample efficiency and failure to generalise to environments even slightly different from those seen during training. We explore language-conditioned policy learning, where agents use machine reading to discover strategies required to solve a task, thereby leveraging language as a means to generalise to new environments. Prior work on language grounding and language-based RL (see for a recent survey) are limited to scenarios in which language specifies the goal for some fixed environment dynamics (; ; ; ;), or the dynamics of the environment vary and are presented in language for some fixed goal. In practice, changes to goals and to environment dynamics tend to occur simultaneously-given some goal, we need to find and interpret relevant information to understand how to achieve the goal. That is, the agent should account for variations in both by selectively reading, thereby generalising to environments with dynamics not seen during training. Our contributions are two-fold. First, we propose a grounded policy learning problem that we call Read to Fight Monsters (RTFM). In RTFM, the agent must jointly reason over a language goal, a document that specifies environment dynamics, and environment observations. In particular, it must identify relevant information in the document to shape its policy and accomplish the goal. To necessitate reading comprehension, we expose the agent to ever changing environment dynamics and corresponding language descriptions such that it cannot avoid reading by memorising any particular environment dynamics. We procedurally generate environment dynamics and natural language templated descriptions of dynamics and goals to produced a combinatorially large number of environment dynamics to train and evaluate RTFM. Second, we propose txt2π to model the joint reasoning problem in RTFM. We show that txt2π generalises to goals and environment dynamics not seen during training, and outperforms previous language-conditioned models such as language-conditioned CNNs and FiLM both in terms of sample efficiency and final win-rate on RTFM. 
Through curriculum learning where we adapt txt2π trained on simpler tasks to more complex tasks, we obtain agents that generalise to tasks with natural language documents that require five hops of reasoning between the goal, document, and environment observations. Our qualitative analyses show that txt2π attends to parts of the document relevant to the goal and environment observations, and that the ing agents exhibit complex behaviour such as retrieving correct items, engaging correct enemies after acquiring correct items, and avoiding incorrect enemies. Finally, we highlight the complexity of RTFM in scaling to longer documents, richer dynamics, and natural language variations. We show that significant improvement in language-grounded policy learning is needed to solve these problems in the future. Language-conditioned policy learning. A growing body of research is learning policies that follow imperative instructions. The granularity of instructions vary from high-level instructions for application control and games to step-by-step navigation . In contrast to learning policies for imperative instructions, Branavan et al. (2011; 2012); infer a policy for a fixed goal using features extracted from high level strategy descriptions and general information about domain dynamics. Unlike prior work, we study the combination of imperative instructions and descriptions of dynamics. Furthermore, we require that the agent learn to filter out irrelevant information to focus on dynamics relevant to accomplishing the goal. Language grounding. Language grounding refers to interpreting language in a non-linguistic context. Examples of such context include images , games , robot control , and navigation . We study language grounding in interactive games similar to; or , where executable semantics are not provided and the agent must learn through experience. Unlike prior work, we require grounding between an underspecified goal, a document of environment dynamics, and world observations. In addition, we focus on generalisation to not only new goal descriptions but new environments dynamics. We consider a scenario where the agent must jointly reason over a language goal, relevant environment dynamics specified in a text document, and environment observations. In reading the document, the agent should identify relevant information key to solving the goal in the environment. A successful agent needs to perform this language grounding to generalise to new environments with dynamics not seen during training. To study generalisation via reading, the environment dynamics must differ every episode such that the agent cannot avoid reading by memorising a limited set of dynamics. Consequently, we procedurally generate a large number of unique environment dynamics (e.g. effective(blessed items, poison monsters)), along with language descriptions of environment dynamics (e.g. blessed items are effective against poison monsters) and goals (e.g. Defeat the order of the forest). We couple a large, customisable ontology inspired by rogue-like games such as NetHack or Diablo, with natural language templates to create a combinatorially rich set of environment dynamics to learn from and evaluate on. In RTFM, the agent is given a document of environment dynamics, observations of the environment, and an underspecified goal instruction. Figure 1 illustrates an instance of the game. Concretely, we design a set of dynamics that consists of monsters (e.g. wolf, goblin), teams (e.g. Order of the Forest), element types (e.g. 
fire, poison), item modifiers (e.g. fanatical, arcane), and items (e.g. sword, hammer). When the player is in the same cell with a monster or weapon, the player picks up the item or engages in combat with the monster. The player can possess one item at a time, and drops existing Figure 1: RTFM requires jointly reasoning over the goal, a document describing environment dynamics, and environment observations. This figure shows key snapshots from a trained policy on one randomly sampled environment. Frame 1 shows the initial world. In 4, the agent approaches "fanatical sword", which beats the target "fire goblin". In 5, the agent acquires the sword. In 10, the agent evades the distractor "poison bat" while chasing the target. In 11, the agent engages the target and defeats it, thereby winning the episode. Sprites are used for visualisation -the agent observes cell content in text (shown in white). More examples are in appendix A. weapons if they pick up a new weapon. A monster moves towards the player with 60% probability, and otherwise moves randomly. The dynamics, the agent's inventory, and the underspecified goal are rendered as text. The game world is rendered as a matrix of text in which each cell describes the entity occupying the cell. We use human-written templates for stating which monsters belong to which team, which modifiers are effective against which element, and which team the agent should defeat (see appendix H for details on collection and G for a list of entities in the game). In order to achieve the goal, the agent must cross-reference relevant information in the document and as well as in the observations. During every episode, we subsample a set of groups, monsters, modifiers, and elements to use. We randomly generate group assignments of which monsters belong to which team and which modifier is effective against which element. A document that consists of randomly ordered statements corresponding to this group assignment is presented to the agent. We sample one element, one team, and a monster from that team (e.g. "fire goblin" from "Order of the forest") to be the target monster. Additionally, we sample one modifier that beats the element and an item to be the item that defeats the target monster (e.g. "fanatical sword"). Similarly, we sample an element, a team, and a monster from a different team to be the distractor monster (e.g. poison bat), as well as an item that defeats the distractor monster (e.g. arcane hammer). In order to win the game (e.g. Figure 1), the agent must 1. identify the target team from the goal (e.g. Order of the Forest) 2. identify the monsters that belong to that team (e.g. goblin, jaguar, and lynx) 3. identify which monster is in the world (e.g. goblin), and its element (e.g. fire) 4. identify the modifiers that are effective against this element (e.g. fanatical, shimmering) 5. find which modifier is present (e.g. fanatical), and the item with the modifier (e.g. sword) 6. pick up the correct item (e.g. fanatical sword) 7. engage the correct monster in combat (e.g. fire goblin). If the agent deviates from this trajectory (e.g. does not have correct item before engaging in combat, engages with distractor monster), it cannot defeat the target monster and therefore will lose the game. The agent receives a reward of +1 if it wins the game and -1 otherwise. RTFM presents challenges not found in prior work in that it requires a large number of grounding steps in order to solve a task. 
In order to perform this grounding, the agent must jointly reason over a language goal and document of dynamics, as well as environment observations. In addition to the environment, the positions of the target and distractor within the document are randomised, so the agent cannot memorise ordering patterns in order to solve the grounding problems, and must instead identify information relevant to the goal and environment at hand. We split environments into train and eval sets. No assignments of monster-team-modifier-element are shared between train and eval to test whether the agent is able to generalise to new environments with dynamics not seen during training via reading. There are more than 2 million train or eval environments without considering the natural language templates, and 200 million otherwise. With random ordering of templates, the number of unique documents exceeds 15 billion. We propose the txt2π model, which builds representations that capture three-way interactions between the goal, document describing environment dynamics, and environment observations. We begin with the definition of the Bidirectional Feature-wise Linear Modulation (FiLM 2) layer, which forms the core of our model. Feature-wise linear modulation (FiLM), which modulates visual inputs using representations of textual instructions, is an effective method for image captioning and instruction following . In RTFM, the agent must not only filter concepts in the visual domain using language but filter concepts in the text domain using visual observations. To support this, FiLM 2 builds codependent representations of text and visual inputs by further incorporating conditional representations of the text given visual observations. Figure 2 shows the FiLM 2 layer. We use upper-case bold letters to denote tensors, lower-case bold letters for vectors, and non-bold letters for scalars. Exact dimensions of these variables are shown in Table 4 in appendix B. Let x text denote a fixed-length d text -dimensional representation of the text and X vis the representation of visual inputs with height H, width W, and d vis channels. Let Conv denote a convolution layer. Let + and * symbols denote element-wise addition and multiplication operations that broadcast over spatial dimensions. We first modulate visual features using text features and, unlike FiLM, additionally modulate text features using visual features (see Figure 2). The output of the FiLM 2 layer consists of the sum of the modulated features V, as well as a max-pooled summary s over this sum across spatial dimensions. We model interactions between observations from the environment, goal, and document using FiLM 2 layers. We first encode text inputs using bidirectional LSTMs, then compute summaries using self-attention and conditional summaries using attention. We concatenate text summaries into text features, which, along with visual features, are processed through consecutive FiLM 2 layers. In the case of a textual environment, we consider the grid of word embeddings as the visual features for FiLM 2. The final FiLM 2 output is further processed by MLPs to compute a policy distribution over actions and a baseline for advantage estimation. Figure 3 shows the txt2π model. Let E obs denote word embeddings corresponding to the observations from the environment, where E obs [:, :, i, j] represents the embeddings corresponding to the l obs -word string that describes the objects in location (i, j) in the grid-world.
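The exact parameterisation of the FiLM 2 layer is given in Figure 2 and appendix B; the following PyTorch sketch only illustrates the bidirectional idea (text modulating vision and vision modulating text), and all projections and dimensions below are assumptions made for the example.

```python
import torch
import torch.nn as nn

class BiFiLMLayer(nn.Module):
    """Sketch of a FiLM^2-style layer: text modulates vision, vision modulates text."""
    def __init__(self, d_text, d_vis, d_out):
        super().__init__()
        self.conv = nn.Conv2d(d_vis, d_out, kernel_size=3, padding=1)
        self.text_to_gb = nn.Linear(d_text, 2 * d_out)   # gamma/beta for the visual branch
        self.vis_to_gb = nn.Linear(d_vis, 2 * d_text)    # gamma/beta for the text branch
        self.text_proj = nn.Conv2d(d_text, d_out, kernel_size=1)

    def forward(self, x_text, x_vis):
        # x_text: (B, d_text), x_vis: (B, d_vis, H, W)
        B, _, H, W = x_vis.shape
        g_v, b_v = self.text_to_gb(x_text).chunk(2, dim=-1)        # text -> visual modulation
        vis = self.conv(x_vis)
        vis = g_v[:, :, None, None] * vis + b_v[:, :, None, None]  # broadcast over H, W

        vis_summary = x_vis.mean(dim=(2, 3))                       # (B, d_vis)
        g_t, b_t = self.vis_to_gb(vis_summary).chunk(2, dim=-1)    # vision -> text modulation
        text = g_t * x_text + b_t
        text_map = self.text_proj(text[:, :, None, None]).expand(-1, -1, H, W)

        fused = vis + text_map              # co-dependent features V (text folded into the map)
        summary = fused.amax(dim=(2, 3))    # max-pooled summary s over spatial dimensions
        return fused, summary

layer = BiFiLMLayer(d_text=32, d_vis=16, d_out=64)
V, s = layer(torch.randn(2, 32), torch.randn(2, 16, 6, 6))
print(V.shape, s.shape)   # torch.Size([2, 64, 6, 6]) torch.Size([2, 64])
```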
Let E doc, E inv, and E goal respectively denote the embeddings corresponding to the l doc -word document, the l inv -word inventory, and the l goal -word goal. We first compute a fixed-length summary c goal of the goal using a bidirectional LSTM followed by self-attention . We abbreviate self-attention over the goal as c goal = selfattn(H goal). We similarly compute a summary of the inventory as c inv = selfattn(BiLSTM inv (E inv)). Next, we represent the document encoding conditioned on the goal using dot-product attention . We abbreviate attention over the document encoding conditioned on the goal summary as c doc = attend(H doc, c goal). Next, we build the joint representation of the inputs using successive FiLM 2 layers. At each layer, the visual input to the FiLM 2 layer is the concatenation of the output of the previous layer with positional features. For each cell, the positional feature X pos consists of the x and y distance from the cell to the agent's position respectively, normalized by the width and height of the grid-world. The text input is the concatenation of the goal summary, the inventory summary, the attention over the document given the goal, and the attention over the document given the previous visual summary. Let [a; b] denote the feature-wise concatenation of a and b.
Model | Train | Eval 6 × 6 | Eval 10 × 10
conv | 24 ± 0 | 25 ± 1 | 13 ± 1
FiLM | 49 ± 1 | 49 ± 2 | 32 ± 3
no task attn | 49 ± 2 | 49 ± 2 | 35 ± 6
no vis attn | 49 ± 2 | 49 ± 1 | 40 ± 12
no text mod | 49 ± 1 | 49 ± 2 | 35 ± 2
txt2π | 84 ± 21 | 83 ± 21 | 66 ± 22
Table 1: Win rate (mean ± standard deviation) on training environments and on held-out evaluation environments.
BiLSTM vis-doc (E doc) is another encoding of the document similar to H goal, produced using a separate LSTM, such that the document is encoded differently for attention with the visual features and with the goal. For i = 0, we concatenate the bag-of-words embeddings of the grid with positional features as the initial visual features V = [j E obs,j ; X pos]. We max pool a linear transform of the initial visual features to compute the initial visual summary. The policy y policy and baseline y baseline are computed from the visual summary of the last FiLM 2 layer, where MLP policy and MLP baseline are 2-layer multi-layer perceptrons with ReLU activation. We train using TorchBeast (Küttler et al., 2019), an implementation of IMPALA . Please refer to appendix D for details. We consider variants of RTFM by varying the size of the grid-world (6 × 6 vs 10 × 10), allowing many-to-one group assignments to make disambiguation more difficult (group), allowing dynamic, moving monsters that hunt down the player (dyna), and using natural language templated documents (nl). In the absence of many-to-one assignments, the agent does not need to perform steps 3 and 5 in section 3 as there is no need to disambiguate among many assignees, making it easier to identify relevant information. We compare txt2π to the FiLM model by and a language-conditioned residual CNN model. We train on one set of dynamics (e.g. group assignments of monsters and modifiers) and evaluate on a held-out set of dynamics. We also study three variants of txt2π. In no task attn, the document attention conditioned on the goal utterance (equation 16) is removed and the goal is instead represented through self-attention and concatenated with the rest of the text features. In no vis attn, we do not attend over the document given the visual output of the previous layer (equation 18), and the document is instead represented through self-attention.
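A compact sketch of the text summarisation operators used above (selfattn over BiLSTM states and attend conditioned on a query); the scoring vector and hidden sizes below are assumptions made for illustration, not the actual txt2π parameterisation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def selfattn(H, w):
    """Summarise BiLSTM states H (B, T, d) with a learned scoring vector w (d,)."""
    scores = F.softmax(H @ w, dim=1)          # (B, T)
    return (scores.unsqueeze(-1) * H).sum(1)  # (B, d)

def attend(H, q):
    """Dot-product attention over H (B, T, d) conditioned on a query q (B, d)."""
    scores = F.softmax((H * q.unsqueeze(1)).sum(-1), dim=1)   # (B, T)
    return (scores.unsqueeze(-1) * H).sum(1)                  # (B, d)

B, T, d = 2, 7, 20
goal_lstm = nn.LSTM(30, d // 2, bidirectional=True, batch_first=True)
doc_lstm = nn.LSTM(30, d // 2, bidirectional=True, batch_first=True)

E_goal, E_doc = torch.randn(B, 5, 30), torch.randn(B, T, 30)
H_goal, _ = goal_lstm(E_goal)     # (B, 5, d)
H_doc, _ = doc_lstm(E_doc)        # (B, T, d)

w = torch.randn(d)
c_goal = selfattn(H_goal, w)      # fixed-length goal summary
c_doc = attend(H_doc, c_goal)     # document summary conditioned on the goal
print(c_goal.shape, c_doc.shape)  # torch.Size([2, 20]) torch.Size([2, 20])
```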
Transfer from \ Transfer to | 6 × 6 | 6 × 6 dyna | 6 × 6 groups | 6 × 6 nl | 6 × 6 dyna groups | 6 × 6 group nl | 6 × 6 dyna nl | 6 × 6 dyna group nl
random | 84 ± 20 | 26 ± 7 | 25 ± 3 | 45 ± 6 | 23 ± 2 | 25 ± 3 | 23 ± 2 | 23 ± 2
+ 6 × 6 | - | 85 ± 9 | 82 ± 19 | 78 ± 24 | 64 ± 12 | 52 ± 13 | 53 ± 18 | 40 ± 8
+ dyna | - | - | - | - | 77 ± 10 | - | 65 ± 16 | 43 ± 4
+ group | - | - | - | - | - | - | - | 65 ± 17
Table 2: Curriculum training results. We keep 5 randomly initialised models through the entire curriculum. A cell in row i and column j shows transfer from the best-performing setting in the previous stage (bolded in row i − 1) to the new setting in column j. Each cell shows final mean and standard deviation of win rate on the training environments. Each experiment trains for 50 million frames, except for the initial stage (first row, 100 million instead). For the last stage (row 4), we also transfer to a 10 × 10 + dyna + group + nl variant and obtain 61 ± 18 win rate.
In no text mod, text modulation using visual features (equation 6) is removed. Please see appendix C for details on our model and baselines, and appendix D for training details. We compare txt2π to baselines and ablated variants on a simplified variant of RTFM in which there are one-to-one group assignments (no group), stationary monsters (no dyna), and no natural language templated descriptions (no nl). Figure 4 shows that compared to baselines and ablated variants, txt2π is more sample efficient and converges to higher performance. Moreover, no ablated variant is able to solve the tasks; it is the combination of ablated features that enables txt2π to win consistently. Qualitatively, the ablated variants converge to locally optimum policies in which the agent often picks up a random item and then attacks the correct monster, resulting in a ∼50% win rate. Table 1 shows that all models, with the exception of the CNN baseline, generalise to new evaluation environments with dynamics and world configurations not seen during training, with txt2π outperforming FiLM and the CNN model. We find similar results for txt2π, its ablated variants, and baselines on a separate, language-based rock-paper-scissors task in which the agent needs to deduce cyclic dependencies (which type beats which other type) through reading in order to acquire the correct item and defeat a monster. We observe that the performance of reading models transfers from training environments to new environments with unseen types and unseen dependencies. Compared to ablated variants and baselines, txt2π is more sample efficient and achieves higher performance on both training and new environment dynamics. When transferring to new environments, txt2π remains more sample efficient than the other models. Details on these experiments are found in appendix E.
Train env | Eval env | Win rate (train) | Win rate (eval)
6 × 6 | 6 × 6 | 65 ± 17 | 55 ± 22
6 × 6 | 10 × 10 | - | 55 ± 27
10 × 10 | 10 × 10 | 61 ± 18 | 43 ± 13
Table 3: Win rate when evaluating on new dynamics and world configurations for txt2π on the full RTFM problem.
Due to the long sequence of co-references the agent must perform in order to solve the full RTFM (10 × 10 with moving monsters, many-to-one group assignments, and natural language templated documents) we design a curriculum to facilitate policy learning by starting with simpler variants of RTFM. We start with the simplest variant (no group, no dyna, no nl) and then add in an additional dimension of complexity. We repeatedly add more complexity until we obtain 10×10 worlds with moving monsters, many-to-one group assignments and natural language templated descriptions.
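The following toy sketch only illustrates the shape of such a curriculum, carrying the best run of each stage forward to the next, harder variant; the training function is a placeholder, not the actual IMPALA setup.

```python
import random

# Hypothetical curriculum stages: each stage adds one dimension of complexity.
STAGES = [
    {"size": 6,  "dyna": False, "group": False, "nl": False},   # simplest variant
    {"size": 6,  "dyna": True,  "group": False, "nl": False},
    {"size": 6,  "dyna": True,  "group": True,  "nl": False},
    {"size": 6,  "dyna": True,  "group": True,  "nl": True},
    {"size": 10, "dyna": True,  "group": True,  "nl": True},    # full RTFM
]

def toy_train(stage, init_params):
    """Placeholder for IMPALA training; returns (trained_params, win_rate)."""
    difficulty = stage["size"] / 6 + stage["dyna"] + stage["group"] + stage["nl"]
    head_start = 0.0 if init_params is None else init_params
    win_rate = min(1.0, head_start + random.uniform(0.3, 0.9) / difficulty)
    return win_rate, win_rate   # reuse the win rate as a stand-in for "parameters"

def run_curriculum(n_runs=5):
    runs = [None] * n_runs                      # random initialisation at the first stage
    for stage in STAGES:
        results = [toy_train(stage, p) for p in runs]
        best_params, best_win = max(results, key=lambda r: r[1])
        runs = [best_params] * n_runs           # transfer the best run forward
        print(stage, f"-> best train win rate {best_win:.2f}")
    return runs

run_curriculum()
```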
The performance across the curriculum is shown in Table 2 (see Figure 13 in appendix F for training curves of each stage). We see that curriculum learning is crucial to making progress on RTFM, and that initial policy training (first row of Table 2) with additional complexities in any of the dimensions results in significantly worse performance. We take each of the 5 runs after training through the whole curriculum and evaluate them on dynamics not seen during training. Table 3 shows variants of the last stage of the curriculum in which the model was trained on 6 × 6 versions of the full RTFM and in which the model was trained on 10 × 10 versions of the full RTFM. We see that models trained on smaller worlds generalise to bigger worlds. Despite curriculum learning, however, the performance of the final model trails that of human players, who can consistently solve RTFM. This highlights the difficulties of the RTFM problem and suggests that there is significant room for improvement in developing better language grounded policy learners. Attention maps. Figure 5 shows attention conditioned on the goal and on observation summaries produced by intermediate FiLM 2 layers. Goal-conditioned attention consistently locates the clause that contains the team the agent is supposed to attack. Intermediate layer attentions focus on regions near modifiers and monsters, particularly those that are present in the observations. These results suggest that attention mechanisms in txt2π help identify relevant information in the document. Analysis of trajectories and failure modes. We examine trajectories from well-performing policies (80% win rate) as well as poorly-performing policies (50% win rate) on the full RTFM. We find that well-performing policies exhibit a number of consistent behaviours such as identifying the correct item to pick up to fight the target monster, avoiding distractors, and engaging target monsters after acquiring the correct item. In contrast, the poorly-performing policies occasionally pick up the wrong item, causing the agent to lose when engaging with a monster. In addition, it occasionally gets stuck in evading monsters indefinitely, causing the agent to lose when the time runs out. Replays of both policies can be found in GIFs in the supplementary materials 1. We proposed RTFM, a grounded policy learning problem in which the agent must jointly reason over a language goal, relevant dynamics specified in a document, and environment observations. In order to study RTFM, we procedurally generated a combinatorially large number of environment dynamics such that the model cannot memorise a set of environment dynamics and must instead generalise via reading. We proposed txt2π, a model that captures three-way interactions between the goal, document, and observations, and that generalises to new environments with dynamics not seen during training. txt2π outperforms baselines such as FiLM and language-conditioned CNNs. Through curriculum learning, txt2π performs well on complex RTFM tasks that require several reasoning and coreference steps with natural language templated goals and descriptions of the dynamics. Our work suggests that language understanding via reading is a promising way to learn policies that generalise to new environments. Despite curriculum learning, our best models trail the performance of human players, suggesting that there is ample room for improvement in grounded policy learning on complex RTFM problems.
In addition to jointly learning policies based on external documentation and language goals, we are interested in exploring how to use supporting evidence in external documentation to reason about plans and induce hierarchical policies . A PLAYTHROUGH EXAMPLES These figures show key snapshots from a trained policy on randomly sampled environments. Figure 6: The initial world is shown in 1. In 4, the agent avoids the target "lightning shaman" because it does not yet have "arcane spear", which beats the target. In 7 and 8, the agent is cornered by monsters. In 9, the agent is forced to engage in combat and loses. Hyperparameters. The txt2π model used in our experiments consists of 5 consecutive FiLM 2 layers, each with 3x3 convolutions and padding and stride sizes of 1. The txt2π layers have channels of 16, 32, 64, 64, and 64, with residual connections from the 3rd layer to the 5th layer. The Goal-doc LSTM (see Figure 3) shares weights with the Goal LSTM. The Inventory and Goal LSTMs have a hidden dimension of size 10, whereas the Vis-doc LSTM has a dimension of 100. We use a word embedding dimension of 30. The input to the network is the concatenation of the observations V and text representations. The text representations consist of self-attention over bidirectional LSTM-encoded goal, document, and inventory. These attention outputs are replicated over the dimensions of the grid and concatenated feature-wise with the observation embeddings in each cell. Figure 8 illustrates the CNN baseline. The FiLM baseline encodes text in the same fashion as the CNN model. However, instead of using convolutional layers, each layer is a FiLM layer from. Note that in our case, the language representation is a self-attention over the LSTM states instead of a concatenation of terminal LSTM states. We train using an implementation of IMPALA . In particular, we use 20 actors and a batch size of 24. When unrolling actors, we use a maximum unroll length of 80 frames. Each episode lasts for a maximum of 1000 frames. We optimise using RMSProp with a learning rate of 0.005, which is annealed linearly for 100 million frames. We set α = 0.99 and ε = 0.01. During training, we apply a small negative reward for each time step of −0.02 and a discount factor of 0.99 to facilitate convergence. We additionally include an entropy cost to encourage exploration. Let y policy denote the policy. The entropy loss is calculated as the negative entropy of the policy, Σ i y policy (i) log y policy (i). In addition to the policy gradient, we add in the entropy loss with a weight of 0.005 and the baseline loss with a weight of 0.5. The baseline loss is computed as the root mean square of the advantages . When tuning models, we perform a grid search using the training environments to select hyperparameters for each model. We train 5 runs for each configuration in order to report the mean and standard deviation. When transferring, we transfer each of the 5 runs to the new task and once again report the mean and standard deviation. Figure 10: Performance on the Rock-paper-scissors task across models. Left shows final performance on environments whose goals and dynamics were seen during training. Right shows performance on the environments whose goals and dynamics were not seen during training. E ROCK-PAPER-SCISSORS In addition to the main RTFM tasks, we also study a simpler formulation called Rock-paper-scissors that has a fixed goal. In Rock-paper-scissors, the agent must interpret a document that describes the environment dynamics in order to solve the task. Given a set of characters (e.g.
a-z), we sample 3 characters and set up a rock-paper-scissors-like dependency graph between the characters (e.g. "a beats b, b beats c, c beats a"). We then spawn a monster in the world with a randomly assigned type (e.g. "b goblin"), as well as an item corresponding to each type (e.g. "a", "b", and "c"). The attributes of the agent, monster, and items are set up such that the player must obtain the correct item and then engage the monster in order to win. Any other sequence of actions (e.g. engaging the monster without the correct weapon) results in a loss. The winning policy should then be to first identify the type of monster present, then cross-reference the document to find which item defeats that type, then pick up the item, and finally engage the monster in combat. Figure 9 shows an instance of Rock-paper-scissors. Reading models generalise to new environments. We split environment dynamics by permuting 3-character dependency graphs from an alphabet, which we randomly split into training and held-out sets. This corresponds to the "permutations" setting in Table 5. We train models on the 10 × 10 worlds from the training set and evaluate them on dynamics both seen and not seen during training. The left of Figure 10 shows the performance of models on worlds of varying sizes with training environment dynamics. In this case, the dynamics (e.g. dependency graphs) were seen during training. For 9 × 9 and 11 × 11 worlds, the world configuration was not seen during training. For 10 × 10 worlds, there is a 5% chance that the initial frame was seen during training. 2 The right of Figure 10 shows the performance on held-out environments not seen during training. We see that all models generalise to environments not seen during training, both when the world configuration is not seen (left) and when the environment dynamics are not seen (right). Reading models generalise to new concepts. In addition to splitting via permutations, we devise two additional ways of splitting environment dynamics by introducing new edges and nodes into the held-out set. Table 5 shows the three different settings. For each, we study the transfer behaviour of models on new environments. Figure 11 shows the learning curve when training a model on the held-out environments directly and when transferring the model trained on train environments to held-out environments. We observe that all models are significantly more sample-efficient when transferring from training environments, despite the introduction of new edges and new nodes. txt2π is more sample-efficient and learns better policies. In Figure 10, we see that the FiLM model outperforms the CNN model on both training environment dynamics and held-out environment dynamics. txt2π further outperforms FiLM, and does so more consistently in that the final performance has less variance. This behaviour is also observed in the results in Figure 11. When training on the held-out set without transferring, txt2π is more sample efficient than FiLM and the CNN model, and achieves higher win-rate. When transferring to the held-out set, txt2π remains more sample efficient than the other models. 2 There are 24360 unique grid configurations given a particular dependency graph, 4060 unique dependency graphs in the training set, and 50 million frames seen during training. After training, the model finishes an episode in approximately 10 frames.
Hence the probability of seeing a redundant initial frame is approximately 5%. Below is a list of entities and modifiers contained in RTFM: Monsters: wolf, jackal, warg, ant, beetle, spider, jaguar, lynx, panther, goblin, bat, imp, shaman, ghost. We collect human-written natural language templates for the goal and the dynamics. The goal statements in RTFM describe which team the agent should defeat. We collect 12 language templates for goal statements. The document of environment dynamics consists of two types of statements. The first type describes which monsters are assigned to which team. The second type describes which modifiers, which describe items, are effective against which element types, which are associated with monsters. We collect 10 language templates for each type of statement. The entire document is composed from statements, which are randomly shuffled. We randomly sample a template for each statement, which we fill with the monsters and team for the first type and modifiers and element for the second type.
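A minimal sketch of how such templates could be filled and shuffled into a document; the template strings below are invented examples, not the collected human-written ones.

```python
import random

# Invented example templates; the real ones were collected from human writers (appendix H).
TEAM_TEMPLATES = [
    "{monster} belongs to the {team}.",
    "The {team} counts {monster} among its members.",
]
MODIFIER_TEMPLATES = [
    "{modifier} items are effective against {element} monsters.",
    "Use a {modifier} weapon to defeat {element} creatures.",
]

def render_document(team_of, beats, rng=random):
    """team_of maps monster -> team; beats maps element -> modifier that defeats it."""
    statements = [rng.choice(TEAM_TEMPLATES).format(monster=m, team=t)
                  for m, t in team_of.items()]
    statements += [rng.choice(MODIFIER_TEMPLATES).format(modifier=mod, element=e)
                   for e, mod in beats.items()]
    rng.shuffle(statements)   # statement order is randomised every episode
    return " ".join(statements)

print(render_document({"goblin": "Order of the Forest", "bat": "Rebel Enclave"},
                      {"fire": "fanatical", "poison": "arcane"}))
```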
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJgob6NKvH
We show that language understanding via reading is a promising way to learn policies that generalise to new environments.
An open question in the Deep Learning community is why neural networks trained with Gradient Descent generalize well on real datasets even though they are capable of fitting random data. We propose an approach to answering this question based on a hypothesis about the dynamics of gradient descent that we call Coherent Gradients: Gradients from similar examples are similar and so the overall gradient is stronger in certain directions where these reinforce each other. Thus changes to the network parameters during training are biased towards those that (locally) simultaneously benefit many examples when such similarity exists. We support this hypothesis with heuristic arguments and perturbative experiments and outline how this can explain several common empirical observations about Deep Learning. Furthermore, our analysis is not just descriptive, but prescriptive. It suggests a natural modification to gradient descent that can greatly reduce overfitting. Neural networks used in practice often have sufficient effective capacity to learn arbitrary maps from their inputs to their outputs. This is typically demonstrated by training a classification network that achieves good test accuracy on a real dataset S, on a modified version of S (call it S) where the labels are randomized and observing that the training accuracy on S is very high, though, of course, the test accuracy is no better than chance . This leads to an important open question in the Deep Learning community (; ; ; ; ; ; ; ; , etc.): Among all maps that fit a real dataset, how does Gradient Descent (GD) find one that generalizes well? This is the question we address in this paper. We start by observing that this phenomenon is not limited to neural networks trained with GD but also applies to Random Forests and Decision Trees. However, there is no mystery with trees: A typical tree construction algorithm splits the training set recursively into similar subsets based on input features. If no similarity is found, eventually, each example is put into its own leaf to achieve good training accuracy (but, of course, at the cost of poor generalization). Thus, trees that achieve good accuracy on a randomized dataset are much larger than those on a real dataset. Is it possible that something similar happens with GD? We believe so. The type of randomized-label experiments described above show that if there are common patterns to be found, then GD finds them. If not, it fits each example on a case-by-case basis. The question then is, what is it about the dynamics of GD that makes it possible to extract common patterns from the data? And what does it mean for a pattern to be common? Since the only change to the network parameters in GD comes from the gradients, the mechanism to detect commonality amongst examples must be through the gradients. We propose that this commonality detection can be explained as follows: 1. Gradients are coherent, i.e, similar examples (or parts of examples) have similar gradients (or similar components of gradients) and dissimilar examples have dissimilar gradients. 2. Since the overall gradient is the sum of the per-example gradients, it is stronger in directions where the per-example gradients are similar and reinforce each other and weaker in other directions where they are different and do not add up. 3. Since network parameters are updated proportionally to gradients, they change faster in the direction of stronger gradients. 4. 
Thus the changes to the network during training are biased towards those that simultaneously benefit many examples instead of a few (or one example). For convenience, we refer to this as the Coherent Gradients hypothesis. It is instructive to work through the proposed mechanism in the context of a simple thought experiment. Consider a training set with two examples a and b. At some point in training, suppose the gradient of a, g a, can be decomposed into two orthogonal components g a1 and g a2 of roughly equal magnitude, i.e., there are two, equally good, independent ways in which the network can better fit a (by using say two disjoint parts of the network). Likewise, for b. Now, further suppose that one of the two ways is common to both a and b, i.e., say g a2 = g b2 = g ab, whereas the other two are example specific and orthogonal to each other, i.e., ⟨g a1, g b1⟩ = 0. Now, the overall gradient is g = g a + g b = g a1 + 2 g ab + g b1. Observe that the gradient is stronger in the direction that simultaneously helps both examples and thus the corresponding parameter changes are bigger than those that only benefit one example. It is important to emphasize that the notion of similarity used above (i.e., which examples are considered similar) is not a constant but changes in the course of training as network parameters change. It starts from a mostly task independent notion due to random initialization and is bootstrapped in the course of training to be task dependent. We say "mostly" because even with random initialization, examples that are syntactically close are treated similarly (e.g., two images differing in the intensities of some pixels as opposed to two images where one is a translated version of the other). The relationship between strong gradients and generalization can also be understood through the lens of algorithmic stability : strong gradient directions are more stable since the presence or absence of a single example does not impact them as much, as opposed to weak gradient directions which may altogether disappear if a specific example is missing from the training set. With this observation, we can reason inductively about the stability of GD: since the initial values of the parameters do not depend on the training data, the initial function mapping examples to their gradients is stable. Now, if all parameter updates are due to strong gradient directions, then stability is preserved. However, if some parameter updates are due to weak gradient directions, then stability is diminished. Since stability (suitably formalized) is equivalent to generalization , this allows us to see how generalization may degrade as training progresses. Based on this insight, we shall see later how a simple modification to GD to suppress the weak gradient directions can dramatically reduce overfitting. In addition to providing insight into why GD generalizes in practice, we believe that the Coherent Gradients hypothesis can help explain several other empirical observations about deep learning in the literature: (a) Learning is slower with random labels than with real labels; (b) Robustness to large amounts of label noise; (c) Early stopping leads to better generalization; (d) Increasing capacity improves generalization (; ); (e) The existence of adversarial initialization schemes; (f) GD detects common patterns even when trained with random labels. A direct experimental verification of the Coherent Gradients hypothesis is challenging since the notion of similarity between examples depends on the parameters of the network and thus changes during training.
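The two-example thought experiment above can be checked numerically; in the toy sketch below the shared component receives twice the weight in the overall gradient.

```python
import numpy as np

g_ab = np.array([0.0, 0.0, 1.0])   # direction that helps both examples a and b
g_a1 = np.array([1.0, 0.0, 0.0])   # helps only example a
g_b1 = np.array([0.0, 1.0, 0.0])   # helps only example b (orthogonal to g_a1)

g_a = g_a1 + g_ab                  # per-example gradients
g_b = g_b1 + g_ab
g = g_a + g_b                      # overall gradient = g_a1 + g_b1 + 2 * g_ab

# The shared direction gets twice the weight, so a gradient step moves the
# parameters mostly along the change that benefits both examples.
print(g)                   # [1. 1. 2.]
print(g @ g_ab, g @ g_a1)  # 2.0 versus 1.0
```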
Our approach, therefore, is to design intervention experiments where we establish a baseline and compare it against variants designed to test some aspect or prediction of the theory. As part of these experiments, we replicate the observations (a)-(c) in the literature noted above, and analyze the corresponding explanations provided by Coherent Gradients (§2), and outline for future work how (d)-(f) may be accounted for (§5). In this paper, we limit our study to simple baselines: vanilla Stochastic Gradient Descent (SGD) on MNIST using fully connected networks. We believe that this is a good starting point, since even in this simple setting, with all frills eliminated (e.g., inductive bias from architecture or explicit regularization, or a more sophisticated optimization procedure), we are challenged to find a satisfactory explanation of why SGD generalizes well. Furthermore, our prior is that the difference between weak and strong directions is small at any one step of training, and therefore having a strong learning signal as in the case of MNIST makes a direct analysis of gradients easier. It also has the benefit of having a smaller carbon footprint and being easier to reproduce. Finally, based on preliminary experiments on other architectures and datasets we are optimistic that the insights we get from studying this simple setup apply more broadly. Our first test of the Coherent Gradients hypothesis is to see what happens when we reduce similarity between examples. Although, at any point during training, we do not know which examples are similar, and which are different, we can (with high probability) reduce the similarity among training examples simply by injecting label noise. In other words, under any notion of similarity, adding label noise to a dataset that has clean labels is likely to make similar examples less similar. Note that this perturbation does not reduce coherence since gradients still depend on the examples. (To break coherence, we would have to make the gradients independent of the training examples which would requiring perturbing SGD itself and not just the dataset). For our baseline, we use the standard MNIST dataset of 60,000 training examples and 10,000 test examples. Each example is a 28x28 pixel grayscale handwritten digit along with a label ('0'-'9'). We train a fully connected network on this dataset. The network has one hidden layer with 2048 ReLUs and an output layer with a 10-way softmax. We initialize it with Xavier and train using vanilla SGD (i.e., no momentum) using cross entropy loss with a constant learning rate of 0.1 and a minibatch size of 100 for 10 5 steps (i.e., about 170 epochs). We do not use any explicit regularizers. We perturb the baseline by modifying only the dataset and keeping all other aspects of the architecture and learning algorithm fixed. The dataset is modified by adding various amounts of noise (25%, 50%, 75%, and 100%) to the labels of the training set (but not the test set). This noise is added by taking, say in the case of 25% label noise, 25% of the examples at random and randomly permuting their labels. Thus, when we add 25% label noise, we still expect about 75% + 0.1 * 25%, i.e., 77.5% of the examples to have unchanged (i.e. "correct") labels which we call the proper accuracy of the modified dataset. In what follows, we call examples with unchanged labels, pristine, and the remaining, corrupt. Also, from this perspective, it is convenient to refer to the original MNIST dataset as having 0% label noise. 
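A small sketch of the label-corruption procedure (permuting the labels of a chosen fraction of examples and recording which examples are pristine versus corrupt); the array names are illustrative.

```python
import numpy as np

def corrupt_labels(labels, noise_frac, seed=0):
    """Randomly permute the labels of a noise_frac fraction of the examples.

    Returns the noisy labels and a boolean mask marking which examples were
    selected for corruption (the remaining examples are 'pristine')."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    n = len(labels)
    idx = rng.choice(n, size=int(noise_frac * n), replace=False)
    noisy[idx] = noisy[rng.permutation(idx)]   # permute labels within the chosen subset
    corrupt = np.zeros(n, dtype=bool)
    corrupt[idx] = True
    return noisy, corrupt

y = np.random.default_rng(1).integers(0, 10, size=60000)   # stand-in for MNIST labels
y_noisy, is_corrupt = corrupt_labels(y, noise_frac=0.25)
# Roughly 10% of the corrupted labels land back on their original value by chance,
# so the 'proper accuracy' of the 25%-noise dataset is about 77.5%.
print((y_noisy == y).mean())
```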
We use a fully connected architecture instead of a convolutional one to mitigate concerns that some of the difference in generalization between the original MNIST and the noisy variants could stem from architectural inductive bias. We restrict ourselves to only 1 hidden layer to have the gradients be as well-behaved as possible. Finally, the network width, learning rate, and the number of training steps are chosen to ensure that exactly the same procedure is usually able to fit all 5 variants to 100% training accuracy. Before looking at the experimental results, it is useful to consider what Coherent Gradients can qualitatively say about this setup. In going from 0% label noise to 100% label noise, as per experiment design, we expect examples in the training set to become more dissimilar (no matter what the current notion of similarity is). Therefore, we expect the per-example gradients to be less aligned with each other. This in turn causes the overall gradient to become more diffuse, i.e., stronger directions become relatively weaker, and consequently, we expect it to take longer to reach a given level of accuracy as label noise increases, i.e., to have a lower realized learning rate. This can be made more precise by considering the following heuristic argument. Let θ t be the vector of trainable parameters of the network at training step t. Let L denote the loss function of the network (over all training examples). Let g t be the gradient of L at θ t and let α denote the learning rate. By Taylor expansion, to first order, the change ∆L t in the loss function due to a small gradient descent step h t = −α · g t is given by ∆L t ≈ ⟨g t, h t⟩ = −α · ‖g t‖², (1) where ‖·‖ denotes the l 2 -norm. Now, let g te denote the gradient of training example e at step t. Since the overall gradient is the sum of the per-example gradients, we have ‖g t‖² = ‖Σ e g te‖² = Σ e,e′ ⟨g te, g te′⟩. (2) Now, heuristically, let us assume that all the g te are roughly of the same magnitude ‖g • t‖, which is not entirely unreasonable (at least at the start of training, if the network has no a priori reason to treat different examples very differently). If all the per-example gradients are approximately orthogonal (i.e., ⟨g te, g te′⟩ ≈ 0 for e ≠ e′), then ‖g t‖² ≈ m · ‖g • t‖², where m is the number of examples. On the other hand, if they are approximately the same (i.e., ⟨g te, g te′⟩ ≈ ‖g • t‖²), then ‖g t‖² ≈ m² · ‖g • t‖². Thus, we expect that the greater the agreement in per-example gradients, the faster the loss should decrease. Finally, for datasets that have significant fractions of pristine and corrupt examples (i.e., the 25%, 50%, and 75% noise) we can make a more nuanced prediction. Since, in those datasets, the pristine examples as a group are still more similar than the corrupt ones, we expect the pristine gradients to continue to align well and sum up to a strong gradient. Therefore, we expect them to be learned faster than the corrupt examples, and at a rate closer to the realized learning rate in the 0% label noise case. Likewise, we expect the realized learning rate on the corrupt examples to be closer to the 100% label noise case. Finally, as the proportion of pristine examples falls with increasing noise, we expect the realized learning rate for pristine examples to degrade. Note that this provides an explanation for the observation in the literature that networks can learn even when the number of examples with noisy labels greatly outnumbers the clean examples, as long as the number of clean examples is sufficiently large, since with too few clean examples the pristine gradients are not strong enough to dominate.
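The m versus m² scaling in the heuristic argument above is easy to verify numerically, as in the following sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 1000, 1000

G_orth = rng.standard_normal((m, d)) / np.sqrt(d)   # near-orthogonal, roughly unit-norm gradients
G_same = np.tile(G_orth[0], (m, 1))                 # perfectly coherent gradients

sq_norm = lambda G: float(G.sum(axis=0) @ G.sum(axis=0))
print(sq_norm(G_orth))   # ~ m   (about 1e3)
print(sq_norm(G_same))   # ~ m^2 (about 1e6)
```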
Figure 1(a) and (b) show the training and test curves for the baseline and the 4 variants. We note that for all 5 variants, at the end of training, we achieve 100% training accuracy but different amounts of generalization. As expected, SGD is able to fit random labels, yet when trained on real data, generalizes well. The results are in agreement with the qualitative predictions made above: 1. In general, as noise increases, the time taken to reach a given level of accuracy (i.e., realized learning rate) increases. 2. Pristine examples are learned faster than corrupt examples. They are learned at a rate closer to the 0% label noise rate whereas the corrupt examples are learned at a rate closer to the 100% label noise rate. 3. With fewer pristine examples, their learning rate reduces. This is most clearly seen in the first few steps of training by comparing say 0% noise with 25% noise. Using Equation 1, note that the magnitude of the slope of the training loss curve is a good measure of the square of the l 2 -norm of the overall gradient. Therefore, from the loss curves of Figure 1 (c), it is clear that in early training, the more the noise, the weaker the l 2 -norm of the gradient. If we assume that the per-example l 2 -norm is the same in all variants at the start of training, then from Equation 2, it is clear that with greater noise, the gradients are more dissimilar. Finally, we note that this experiment is an instance where early stopping (e.g.,) is effective. Coherent gradients and the discussion in §2.2 provide some insight into this: Strong gradients both generalize well (they are stable since they are supported by many examples) and they bring the training loss down quickly for those examples. Thus early stopping maximizes the use of strong gradients and limits the impact of weak gradients. (The experiment in §3 discusses a different way to limit the impact of weak gradients and is an interesting point of comparison with early stopping.) Within each noisy dataset, we expect the pristine examples to be more similar to each other and the corrupt ones to be less similar. In turn, based on the training curves (particularly, Figure 1 (d) ), during the initial part of training, this should mean that the gradients from the pristine examples should be stronger than the gradients from the corrupt examples. We can study this effect via a different decomposition of the square of the l 2 -norm of the gradient (or equivalently, up to a constant, the change in the loss function): ‖g t‖² = ⟨g t, Σ e∈pristine g te⟩ + ⟨g t, Σ e∈corrupt g te⟩. Writing f p t and f c t for the two terms on the right expressed as fractions of ‖g t‖², and based on the foregoing, we expect the pristine fraction f p t to be a larger fraction of the total when training starts and to diminish as training progresses and the pristine examples are fitted. The first row of Figure 2 shows a plot of estimates of f p t and f c t for 25%, 50% and 75% noise. These quantities were estimated by recording a sample of 400 per-example gradients for 600 weights (300 from each layer) in the network. We see that for 25% and 50% label noise, f p t initially starts off higher than f c t and after a few steps they cross over. This happens because at that point all the pristine examples have been fitted and for most of the rest of training the corrupt examples need to be fitted and so they largely contribute to the l 2 -norm of the gradient (or equivalently by Equation 1 to loss reduction). Only at the end, when the corrupt examples have also been fit, do the two curves reach parity.
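A sketch of how the pristine and corrupt fractions could be estimated from a sample of per-example gradients; the function below assumes the decomposition given above and uses random data in place of real gradients.

```python
import numpy as np

def gradient_fractions(per_example_grads, is_corrupt):
    """Split ||g_t||^2 into the pristine and corrupt contributions.

    per_example_grads: (num_examples, num_params) sampled per-example gradients;
    is_corrupt: boolean mask over the examples. The two fractions sum to 1."""
    g = per_example_grads.sum(axis=0)
    total = g @ g
    f_c = g @ per_example_grads[is_corrupt].sum(axis=0) / total
    f_p = g @ per_example_grads[~is_corrupt].sum(axis=0) / total
    return f_p, f_c

rng = np.random.default_rng(0)
grads = rng.standard_normal((400, 600))   # e.g. 400 examples x 600 sampled weights
corrupt = rng.random(400) < 0.25
print(gradient_fractions(grads, corrupt))
```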
In the case of 75% noise, we see that the cross over doesn't happen, but there is a slight slope downwards for the contribution from pristine examples. We believe this is because of the sheer number of corrupt examples, and so even though the individual corrupt example gradients are weak, their sum dominates. To get a sense of statistical significance in our hypothesis that there is a difference between the pristine and corrupt examples as a group, in the remaining rows of Figure 2, we construct a null world where there is no difference between pristine and corrupt. We do that by randomly permuting the "corrupt" and "pristine" designations among the examples (instead of using the actual designations) and replotting. Although the null pristine and corrupt curves are mirror images (as they must be even in the null world since each example is given one of the two designations), we note that for 25% and 50% they do not cross over as they do with the real data. This increases our confidence that the null may be rejected. The 75% case is weaker but only the real data shows the slight downward slope in pristine which none of the nulls typically show. However, all the nulls do show that corrupt is more than pristine which increases our confidence that this is due to the significantly differing sizes of the two sets. (Note that this happens in reverse in the 25% case: pristine is always above corrupt, but they never cross over in the null worlds.) To get a stronger signal for the difference between pristine and corrupt in the 75% case, we can look at a different statistic that adjusts for the different sizes of the pristine and corrupt sets. Let |p| and |c| be the number of pristine and corrupt examples respectively, and define the corresponding per-example (size-adjusted) pristine and corrupt contributions by dividing the group contributions by |p| and |c|. The first row of the corresponding figure shows this statistic on the real data; the remaining rows show the same plots in null worlds where we randomly permute the pristine or corrupt designations of the examples. The results appear somewhat significant but not overwhelmingly so. It would be interesting to redo this on the entire population of examples and trainable parameters instead of a small sample. In the second test of the Coherent Gradients hypothesis, we change GD itself in a very specific (and to our knowledge, novel) manner suggested by the theory. Our inspiration comes from random forests. As noted in the introduction, by building sufficiently deep trees a random forest algorithm can get perfect training accuracy with random labels, yet generalize well when trained on real data. However, if we limit the tree construction algorithm to have a certain minimum number of examples in each leaf, then it no longer overfits. In the case of GD, we can do something similar by suppressing the weak gradient directions. Our baseline setup is the same as before (§2.1) but we add a new dimension by modifying SGD to update each parameter with a "winsorized" gradient where we clip the most extreme values (outliers) among all the per-example gradients. Formally, let g we be the gradient for the trainable parameter w for example e. The usual gradient computation for w is g w = Σ e g we. To winsorize, let l w and u w be lower and upper thresholds chosen, independently for each w, so that the c most extreme per-example gradient values at each end are clipped, and define g c w = Σ e clip(g we, l w, u w). The change to gradient descent is to simply use g c w instead of g w when updating w at each step. Note that although this is conceptually a simple change, it is computationally very expensive due to the need for per-example gradients. To reduce the computational cost we only use the examples in the minibatch to compute l w and u w.
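A minimal sketch of the winsorized gradient computation on a minibatch of per-example gradients; the exact choice of the thresholds l w and u w is our reading of the definition (here exactly c values are clipped at each end, so c = 0 recovers plain SGD).

```python
import numpy as np

def winsorized_gradient(per_example_grads, c):
    """Per parameter, clip the c smallest and c largest per-example gradient
    values in the minibatch before summing; c = 0 recovers the usual gradient."""
    sorted_g = np.sort(per_example_grads, axis=0)
    l_w = sorted_g[c]          # values below this are raised to it
    u_w = sorted_g[-(c + 1)]   # values above this are lowered to it
    return np.clip(per_example_grads, l_w, u_w).sum(axis=0)

rng = np.random.default_rng(0)
g_we = rng.standard_normal((100, 5))   # minibatch of 100 per-example gradients, 5 parameters
for c in (0, 1, 2, 4, 8):
    print(c, np.round(winsorized_gradient(g_we, c)[:3], 3))
```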
Furthermore, instead of using 1 hidden layer of 2048 ReLUs, we use a smaller network with 3 hidden layers of 256 ReLUs each, and train for 60,000 steps (i.e., 100 epochs) with a fixed learning rate of 0.1. We train on the baseline dataset and the 4 noisy variants with c ∈ {0, 1, 2, 4, 8}. Since we have 100 examples in each minibatch, the value of c immediately tells us how many outliers are clipped in each minibatch. For example, c = 2 means the 2 largest and 2 lowest values of the per-example gradient are clipped (independently for each trainable parameter in the network), and c = 0 corresponds to unmodified SGD. If the Coherent Gradient hypothesis is right, then the strong gradients are responsible for making changes to the network that generalize well since they improve many examples simultaneously. On the other hand, the weak gradients lead to overfitting since they only improve a few examples. By winsorizing each coordinate, we suppress the most extreme values and thus ensure that a parameter is only updated in a manner that benefits multiple examples. Therefore: • Since c controls which examples are considered extreme, the larger c is, the less we expect the network to overfit. • But this also makes it harder for the network to fit the training data, and so we expect the training accuracy to fall as well. • Winsorization will not completely eliminate the weak directions. For example, for small values of c we should still expect overfitting to happen over time though at a reduced rate since only the most egregious outliers are suppressed. The resulting training and test curves are shown in Figure 4. The columns correspond to different amounts of label noise and the rows to different amounts of winsorization. In addition to the training and test accuracies (ta and va, respectively), we show the level of overfit which is defined as ta − [ε · 1/10 + (1 − ε) · va], where ε is the fraction of corrupted labels, to account for the fact that the test labels are not randomized. We see that the experimental results are in agreement with the predictions above. In particular, • For c > 1, training accuracies do not exceed the proper accuracy of the dataset, though they may fall short, especially for large values of c. • The rate at which the overfit curve grows goes down with increasing c. Additionally, we notice that with a large amount of winsorization, the training and test accuracies reach a maximum and then go down. Part of the reason is that as a result of winsorization, each step is no longer in a descent direction, i.e., this is no longer gradient descent. Although there has been a lot of work in recent years in trying to understand generalization in Deep Learning, no entirely satisfactory explanation has emerged so far. There is a rich literature on aspects of the stochastic optimization problem such as the loss landscape and minima (e.g., ;), the curvature around stationary points (e.g., ; ; ;), and the implications of stochasticity due to sampling in SGD (e.g.,). However, we believe it should be possible to understand generalization without a detailed understanding of the optimization landscape. For example, since stopping early typically leads to a small generalization gap, the nature of the solutions of GD (e.g., stationary points, the limit cycles of SGD at equilibrium) cannot be solely responsible for generalization. In fact, from this observation, it would appear that an inductive argument for generalization would be more natural. Likewise, there is reason to believe that stochasticity is not fundamental to generalization (though it may help).
For example, modifying the experiment in §2.1 to use full batch leads to similar qualitative generalization . This is consistent with other small scale studies (e.g., Figure 1 of) though we are not aware of any large scale studies on full batch. Our view of optimization is a simple, almost combinatorial, one: gradient descent is a greedy search with some hill-climbing thrown in (due to sampling in SGD and finite step size). Therefore, we worry less about the quality of solutions reached, but more about staying "feasible" at all times during the search. In our context, feasibility means being able to generalize; and this naturally leads us to look at the transition dynamics to see if that preserves generalizability. Another approach to understanding generalization is to argue that gradient-based optimization induces a form of implicit regularization leading to a bias towards models of low complexity. This is an extension of the classical approach where bounding a complexity measure leads to bounds on the generalization gap. As is well known, classical measures of complexity (also called capacity) do not work well. For example, sometimes adding more parameters to a net can help generalization (see for e.g. ;) and, as we have seen, VC-Dimension and Rademacher Complexity-based bounds must be vacuous since networks can memorize random labels and yet generalize on real data. This has led to a lot of recent work in identifying better measures of complexity such as spectrally-normalized margin , path-based group norm, a compression-based approach , etc. However, to our knowledge, none of these measures is entirely satisfactory for accounting for generalization in practice. Please see for an excellent discussion of the challenges. We rely on a different classical notion to argue generalization: algorithmic stability (see for a historical overview). We have provided only an informal argument in Section 1, but there has been prior work by in looking at GD and SGD through the lens of stability, but their formal results do not explain generalization in practical settings (e.g., multiple epochs of training and non-convex objectives). In fact, such an attempt appears unlikely to work since our experimental results imply that any stability bounds for SGD that do not account for the actual training data must be vacuous! (This was also noted by .) That said, we believe stability is the right way to think about generalization in GD for a few reasons. First, stability, suitably formalized, is equivalent to generalization . Therefore, in principle, any explanation of generalizability for a learning problem must, to borrow a term from category theory, factor through stability. Second, a stability based analysis may be more amenable to taking the actual training data into account (perhaps by using a "stability accountant" similar to a privacy accountant) which appears necessary to get non-vacuous bounds for practical networks and datasets. Finally, as we have seen with the modification in §3, a stability based approach is not just descriptive but prescriptive 1 and can point the way to better learning algorithms. The work of is particularly relevant. They compute the Fourier spectrum of ReLU networks and argue based on heuristics and experiments that these networks learn low frequency functions first. In contrast, we focus not on the function learnt, but on the mechanism in GD to detect commonality.
This leads to a perspective that is at once simpler and more general (e.g., it applies equally to networks with other activation functions, with attention, LSTMs, and discrete (combinatorial) inputs). Furthermore, it opens up a path to analyzing generalization via stability. It is not clear if claim a causal mechanism, but their analysis does not suggest an obvious intervention experiment such as ours of §3 to test causality. There are other experimental results that show biases towards linear functions and functions with low descriptive complexity but these papers do not posit a causal mechanism. It is interesting to consider if Coherent Gradients can provide a unified explanation for these observed biases. (concurrent submission) propose a descriptive statistic stiffness based on pairwise per-example gradients and show experimentally that it can be used to characterize generalization. (also concurrent submission) independently propose a very similar statistic called gradient confusion but use it to study the speed of training. Unlike our work, these do not propose causal mechanisms for generalization, but these statistics (which are rather different from those in §2.4) could be useful for the further study of Coherent Gradients. Does the Coherent Gradients hypothesis hold in other settings such as BERT, ResNet, etc.? For that we would need to develop more computationally efficient tests. Can we use the state of the network to explicitly characterize which examples are considered similar and study this evolution in the course of training? We expect non-parametric methods for similarity such as those developed in and their characterization of "easy" examples (i.e., examples learnt early as per) as those with many others like them, to be useful in this context. Can Coherent Gradients explain adversarial initializations ? The adversarial initial state makes semantically similar examples purposefully look different. Therefore, during training, they continue to be treated differently (i.e., their gradients share less in common than they would if starting from a random initialization). Thus, fitting is more case-by-case and while it achieves good final training accuracy, it does not generalize. Can Coherent Gradients along with the Lottery Ticket Hypothesis explain the observation in that wider networks generalize better? By Lottery Ticket, wider networks provide more chances to find initial gradient directions that improve many examples, and by Coherent Gradients, these popular hypotheses are learned preferentially (faster). Can we use the ideas behind Winsorized SGD from §3 to develop a computationally efficient learning algorithm with generalization (and even privacy) guarantees? How do winsorized gradients compare in practice to the algorithm proposed in for privacy? Last, but not least, can we use the insights from this work to design learning algorithms that operate natively on discrete networks?
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ryeFY0EFwS
We propose a hypothesis for why gradient descent generalizes based on how per-example gradients interact with each other.
Recent advances in deep learning have shown promising results in many low-level vision tasks. However, solving the single-image-based view synthesis is still an open problem. In particular, the generation of new images at parallel camera views given a single input image is of great interest, as it enables 3D visualization of the 2D input scenery. We propose a novel network architecture to perform stereoscopic view synthesis at arbitrary camera positions along the X-axis, or Deep 3D Pan, with "t-shaped" adaptive kernels equipped with globally and locally adaptive dilations. Our proposed network architecture, the monster-net, is devised with a novel t-shaped adaptive kernel with globally and locally adaptive dilation, which can efficiently incorporate the global camera shift and handle local 3D geometries of the target image's pixels for the synthesis of naturally looking 3D panned views when a 2-D input image is given. Extensive experiments were performed on the KITTI, CityScapes and our VXXLXX_STEREO indoor dataset to prove the efficacy of our method. Our monster-net significantly outperforms the state-of-the-art method, SOTA, by a large margin in all metrics of RMSE, PSNR, and SSIM. Our proposed monster-net is capable of reconstructing more reliable image structures in synthesized images with coherent geometry. Moreover, the disparity information that can be extracted from the "t-shaped" kernel is much more reliable than that of the SOTA for the unsupervised monocular depth estimation task, confirming the effectiveness of our method. Recent advances in deep learning have pushed forward the state-of-the-art performance for novel view synthesis problems. Novel view synthesis is the task of generating a new view seen from a different camera position, given a single or multiple input images, and finds many applications in robotics, navigation, virtual and augmented reality (VR/AR), cinematography, etc. In particular, the challenging task of generating stereo images given a single input view is of great interest as it enables 3D visualization of the 2D input scene. In addition, the falling price and the increasing availability of the equipment required for VR/AR has fueled the demand for stereoscopic contents. The previous works, such as the Deep3D , have addressed the right-view generation problem in a fully supervised fashion when the input is the left-view to which the output is the synthetic right-view at a fixed camera shift. In contrast, our proposed Deep 3D Pan pipeline enables the generation of new views at arbitrary camera positions along the horizontal X-axis of an input image with far better quality by utilizing adaptive "t-shaped" convolutions with globally and locally adaptive dilations, which take into account the camera shift amount and the local 3D geometries of the target pixels. Panning at arbitrary camera positions allows our proposed model to adjust the baseline (distance between cameras) for different levels of 3D sensation. Additionally, arbitrary panning unlocks the possibility to adjust for different inter-pupillary distances of various persons. Figure 1 shows some generated left and right view images for a given single image input by our proposed Deep 3D Pan pipeline, which we call the "monster-net" (monocular to stereo network). In this paper, we define "pan" in the context of 3D modeling, implying that camera movement is in parallel to the center view plane.
In the following sections, we review the related works to stereoscopic view synthesis and discuss the differences with our proposed method, followed by the formulation of our Deep 3d Pan pipeline and finally, we present outstanding on various challenging stereo datasets, showing superior performance against the previous state-of-the-art methods. Novel view synthesis is a well-studied problem in deep learning-based computer vision, and has already surpassed the classical techniques for both cases of the multiple-image (; ;) and single-image input . The latter, single-image based novel view synthesis, is known to be a much more complex problem compared to multiple-image based ones. Previous deep learning-based approaches usually tend to utilize one of the two techniques to generate a novel view: (i) optical flow guided image warping, and (ii) a "flavor" of kernel estimation, also known as adaptive convolutions. The first technique, optical flow guided image warping, has been widely used by many researchers to indirectly train convolutional neural networks (CNNs) to estimate optical flow or disparity from single or stereo images in an unsupervised fashion, but its final goal was not to synthesize new views. These works include those of (; ; b; ; ; b; ;). However, not all methods have used flow-guided warping to do unsupervised training or to regularize supervised methods for flow estimation. The work of implements plane sweep at the feature level to generate a cost volume for multi-view stereo depth estimation. Plane sweep can be seen as a type of 1D convolution, similar to the 1D kernel utilized in . On the other hand, the second approach, kernel estimation or adaptive convolutions, has proved to be a superior image synthesis technique and has been executed in different ways by many authors. For example: in their early DeepStereo work formulated a CNN capable of synthesizing a middle view by blending multiple plane-swept lateral view inputs weighted by a "selection volume", which can be considered a 1D (or line-shaped) adaptive convolution; in a similar way, devised the Deep3D, a non fully-convolutional network that estimates a series of "probabilistic disparity maps" that are then used to blend multiple shifted versions of the left-view input image to generate a synthetic right-view; The adaptive separable convolutions (SepConv) in the work of approximated adaptive 2D convolutions by two (vertical and horizontal) 1D kernels that are applied sequentially to the input t 0 and t 1 frames for the video interpolation problem; In the works of , although with additional considerations, their multiplane image representation approach can be loosely un- derstood as a 1D adaptive convolution as the final operation involves the reduction of a plane sweep volume; Geometric-aware networks in the work of indirectly achieved adaptive convolutions by learning a fixed number of affine transformations on an input image, where the ing affine-transformed images are then blended together to generate one output image; and finally, in the work of Gonzalez & Kim (2019a), the authors developed the Deep 3D Zoom Net, which estimates a selection volume for the "blending of multiple upscaled versions of the input image", which can be treated as an special case of a 1D adaptive convolution. The DeepStereo and the multiplane image approaches require two or more images as inputs, thus, greatly reducing the complexity of the synthesis task as most ambiguities are removed by counting on multiple views. 
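To make the family of methods discussed above concrete, the following sketch shows the 1D horizontal constant-dilation blending used by Deep3D- and DeepStereo-style approaches: a per-pixel "selection volume" softly blends horizontally shifted copies of the input. This is a minimal PyTorch sketch with hypothetical function and variable names, not code from any of the cited works.

```python
import torch
import torch.nn.functional as F

def blend_shifted_views(image, selection_volume, dilation=1):
    """1D horizontal adaptive kernel (sketch of the Deep3D-style operation).

    image:            (B, 3, H, W) input left view.
    selection_volume: (B, D, H, W) per-pixel weights over D horizontal shifts,
                      assumed softmax-normalized along dim=1 (predicted by a CNN).
    dilation:         fixed pixel distance between consecutive shifts.
    """
    w = image.shape[-1]
    out = torch.zeros_like(image)
    for i in range(selection_volume.shape[1]):
        shift = i * dilation
        # sample pixels `shift` positions to the right of the target location,
        # reading zeros where the source falls outside the image
        shifted = F.pad(image, (0, shift, 0, 0))[:, :, :, shift:shift + w]
        out = out + selection_volume[:, i:i + 1] * shifted
    return out

# usage: the selection volume would come from a network head, here random for shape checking
img = torch.rand(1, 3, 64, 128)
sel = torch.softmax(torch.rand(1, 33, 64, 128), dim=1)
right_view = blend_shifted_views(img, sel)
```

The fixed shift spacing and the purely horizontal sampling in this sketch are exactly the limitations (wasted kernel mass, baseline over-fitting, weak occlusion handling) that the t-shaped adaptive-dilation kernel described next is designed to address.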
In our work, we focus on the single-image based stereoscopic view synthesis task, which is a far more difficult problem as the network needs to understand the 3D geometry in the scene, and to handle complex occlusions, ambiguities and non-Lambertian surfaces. Although the aforementioned methods are distinguished one another, as the different synthesis techniques have their own properties, they can be all interpreted as belonging to a category of adaptive convolutions which are visualized in Figure 2. As observed in Figure 2 -(a), DeepStereo and Deep3D share a same shape of kernels, that is, a 1D horizontal-only kernel that samples the pixels at a fixed interval or dilation along the X-axis for all target output pixels. A 1D horizontal-only constantdilation kernel suffers from three major drawbacks: 1. Inefficient usage of kernel values. When sampling the positions opposite to the camera movement (which are the pixel locations corresponding to a 1 -a 3 in Figure 2 -(a) assuming a rightward camera shift), experiments showed that these kernel values would often be zeros. The same effect repeats when sampling the positions further away from the maximum disparity value of the given scene (which corresponds to the pixel location at a 7, assuming that the maximum disparity is 2 and the dilation is 1) as the network is not able to find valid stereo correspondences for these kernel positions; 2. Right-view synthesis is limited to the trained baseline (distance between stereo cameras), as the models over-fit to a specific training dataset with a fixed baseline; and 3. The 1D line kernel has limited occlusion handling capabilities, as the network will try to fill in the gaps with the information contained only along the horizontal direction, limiting the reconstruction performance of the models on the occluded areas. In contrast, the kernels predicted by the geometric-aware networks have deformable structures adaptive to the given input images. However, only one deformed kernel shape is predicted and shared to synthesize all target output pixels, leading to limited performance. Another drawback of the geometric-aware networks is their complexity as they require three sub-networks and a super-pixel segmentation step as pre-processing, hindering the processing of high-resolution images. For the Deep 3D Zoom Net case, the kernel tends to point to the center of the image, as it performs a blending operation of multiple upscaled versions of the input image. The kernel dilation size of the Deep 3D Zoom Net is adaptive according to the zoom factor applied to the input image, which allows for the generation of arbitrary 3D-zoomed output images. Finally, for the video interpolation case, the SepConv approximates an NxN adaptive kernel via a 1xN and an Nx1 component which are sequentially applied to the input images to generate the output. SepConv has, by design, limited receptive fields, as the dilation size is fixed to 1. Besides, the sequential nature of the kernel forces the vertical component to sample pixels from the output of the horizontal convolution, which could be already degraded due to heavy deformations introduced by the horizontal component. Recent works have also attempted to improve upon the stereoscopic view synthesis by improving the loss functions used to train the CNNs. 
The work of proposed a multi-scale adversarial correlation matching (MS-ACM) loss that learns to penalize structures and ignore noise and textures by maximizing and minimizing the correlation-l 1 distance in the discriminator's featurespace between the generated right-view and the target-view in an adversarial training setup. Whereas the objective function is a key factor in training any CNN, we believe that, at its current state, the stereoscopic view synthesis problem can benefit more from a better pipeline that can handle the previously mentioned issues and using the widely accepted l 1 and perceptual losses for image reconstruction, rather than a more complex loss function. Our proposed dilation adaptive "t-shaped" convolutions incorporate global (new camera position along the X-axis) and local (3D geometries of specific target pixels) information of the input scene into the synthesis of each output pixel value by not only learning the specific kernel that will generate each output pixel, but also by learning the proper dilation value for each kernel. The "t" shape of the kernel allows the network to account for occlusions by filling-in the gaps (missing information in the output) due to shifted camera positions using not only left-and-right pixels (like DeepStereo and Deep3D), but also up-and-down neighboring pixel information. In addition, the notions of global and local dilations allow our proposed monocular to stereo network, the monster-net, to generate arbitrarily 3D panned versions of the input center view along the X-axis, a useful feature not present in previous works that allows adjusting for eye-to-eye separation and/or level of 3D sensation. In order to effectively synthesize an arbitrary 3D panned image, we propose a global dilation filter as shown in Figure 3. Our proposed cross-shaped global dilation filter where T c (x, y) is the filter parameter value of T d (p) at the center location p. The upper, bottom, left and right wing parameters (T u, T b, T l, T r) of the cross-shaped dilation (d) filter are defined as where n u, n b, n l and n r indicate the numbers of filter parameters in T u, T b, T l, and T r, respectively. For the cross-shaped dilation filter shown in Figure 3, it is more appropriate to have a longer length of Figure 4: Our proposed "t-shaped" kernels are overlaid on top of a center input image. The distance between samples (dilation) is adaptive according to the amount and direction of 3D panning to be applied to the input image and the local 3D geometry of the scene. the right (left) filter wing than the other three wings when the camera panning is rightward (leftward), as it allows capturing more useful information for the synthesis of a right (left) panned image. In this case, n r (n l) is set to be greater than n l (n r), n u and n b, such that the global dilation filter showed in Figure 3 can be elaborated as a "t-shaped" kernel which can then take into account the camera panning direction for synthesis. Figure 4 shows examples of "t-shaped" kernels overlaid on top of an input center image. As shown in Figure 4 -(a), the "t-shaped" kernel has a longer left wing of filter parameters for the synthesis of a leftward camera panning while in Figure 4 -(b) it shows a longer right-wing of filter parameters for the synthesis of a rightward camera panning. Why "t" shape? 
Experiments with symmetric kernel shapes (e.g., "+" shape) were performed first, but it was noted that most of the elements on the left (right), upper and bottom sides against the centered red dot of the kernel tended to have very small values close to zeros for most target pixels for the rightward (leftward) movement of a camera. Similar to SepConv, the experiments with a horizontal kernel applied first followed by a vertical kernel were performed, yielding poor . It was discovered that the "t" shape is more efficient than the "+" shape as it picks up more effective sampling positions with a fewer parameters than the standard adaptive convolutions such as those in . As depicted in Figure 5, the "t-shaped" kernels can embed useful information like disparity and occlusion from a monocular image into the stereo synthesis process. The longer right (left) wing of the "t-shaped" kernel contains disparity information, as it will try to sample pixels from the right (left) side to the target position when the camera is assumed to move in the rightward (leftward) direction. Figure 5 -(a) depicts a primitive disparity map D p that was constructed by the weighted sum of the kernel values in the longer wing part as described by where T r (x + id, y) is the i-th value of the longer wing T r at pixel location p = (x, y) for the rightward 3D panning of an input center image I c. Note that D p is normalized in the range. Interestingly, as shown in Figure 5 -(a), the generated disparity map looks very natural and appropriate, which implies the effectiveness of our "t-shaped" kernel approach. The short left (right), upper and bottom wings of the "t-shaped" kernel contain occlusion information, as the network will try to fill in the gaps utilizing surrounding information that is not present in the long part of the "t-shaped" kernel. It is also interesting to see the occlusion map in Figure 5 -(b) where a primitive rightward occlusion map O r p was constructed by summing up the "t-shaped" kernel values in the short wing parts according to the following: The bright regions or spots in Figure 5 -(b) indicate the occlusions due to the camera shift along the horizontal axis of the input center image, which are likely to happen for the case of the camera's rightward panning. For both Equations and, the primitive disparity and occlusion maps for the leftward panning case can be obtained by swapping the r and l indices. ) maps generated from the proposed "t-shaped" kernel. In general, the disparity amounts between stereo images are variable at different pixel locations according to the distance between stereo cameras and the local scene geometries. Therefore, it is necessary to take into account the variable disparity in synthesizing a new view in a globally and locally adaptive fashion. For this, a "t-shaped" kernel is introduced with a controllable dilation factor by which both camera shift and local changes in image geometry can be effectively taken into account when synthesizing a new (left or right) view for the input center image. Any kernel with a fixed dilation may cause a limited accuracy in synthesizing a novel view because the disparity amounts vary over the whole image according to the cameras' baseline and the local geometries. So, our "t-shaped" kernel is proposed to make the synthesis of novel views not only globally, but locally adaptive to the camera shift and its local changes in image geometry by controlling its dilation size per-pixel in the output image. 
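The following PyTorch sketch illustrates how a per-pixel "t-shaped" kernel at a given dilation could be applied, and how the primitive disparity and occlusion maps can be read off its weights. The tap layout, the integer dilation, and the exact disparity formula (an expected sampling offset over the long wing, normalized to [0, 1]) are illustrative readings of the description above, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def shift_image(image, sy, sx):
    """Pixel (y, x) of the result takes the value of input pixel (y+sy, x+sx);
    zeros where the source falls outside the image."""
    _, _, h, w = image.shape
    py, px = abs(sy), abs(sx)
    padded = F.pad(image, (px, px, py, py))
    return padded[:, :, py + sy:py + sy + h, px + sx:px + sx + w]

def t_offsets(n_u=16, n_b=16, n_l=16, n_r=32):
    """(dy, dx) tap positions of a rightward-panning "t-shaped" kernel:
    center, three short wings (up, bottom, left) and one long right wing."""
    offs = [(0, 0)]
    offs += [(-i, 0) for i in range(1, n_u + 1)] + [(i, 0) for i in range(1, n_b + 1)]
    offs += [(0, -i) for i in range(1, n_l + 1)] + [(0, i) for i in range(1, n_r + 1)]
    return offs

def apply_t_kernel(image, weights, offsets, dilation):
    """Per-pixel adaptive t-shaped convolution at one integer dilation.
    weights: (B, K, H, W), one channel per tap, predicted by the t-net.
    A fractional dilation would need bilinear sampling; integers keep the sketch short."""
    out = torch.zeros_like(image)
    for k, (dy, dx) in enumerate(offsets):
        out = out + weights[:, k:k + 1] * shift_image(image, dy * dilation, dx * dilation)
    return out

def primitive_disparity(long_wing, dilation):
    """Primitive disparity D_p (plausible reading of the weighted-sum construction
    described above): expected sampling offset under the long-wing weights,
    normalized to [0, 1]. long_wing: (B, n_r, H, W)."""
    n_r = long_wing.shape[1]
    idx = torch.arange(1, n_r + 1, dtype=long_wing.dtype,
                       device=long_wing.device).view(1, n_r, 1, 1)
    disp = (long_wing * idx * dilation).sum(dim=1, keepdim=True)
    disp = disp - disp.amin(dim=(2, 3), keepdim=True)
    return disp / (disp.amax(dim=(2, 3), keepdim=True) + 1e-8)

def primitive_occlusion(short_wings):
    """Primitive occlusion map O_p: total kernel mass placed on the short wings."""
    return short_wings.sum(dim=1, keepdim=True)
```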
Globally, a short dilation value is more appropriate when slightly shifting the camera, while a high dilation value is desirable when largely shifting the camera position. In a local manner, a small dilation value is appropriate for far-away objects from the camera while very close objects to the camera can be better reconstructed with a larger dilation value. We define the global dilation g d as the pixel distance between two consecutive kernel sampling positions, which is given by the pan amount P a to be applied to the input center image I c divided by the total number of filter parameters in the longer "t-shaped" kernel wing (n l or n r). P a has a unit of pixels mapped in the image corresponding to the camera shift into the left or right direction and takes on floating numbers. Therefore, the global dilation g d is given by where P a takes on positive (negative) values for the rightward (leftward) panning scenario. The pan amount needed to generate a left-view or a right-view is determined during training according to the closest possible objects to the camera. The "closest possible objects" vary over different training datasets. For our novel view synthesis task, like in (; b), we assume the KITTI dataset to have a maximum or "closest possible object" disparity of 153 pixels. During training, P a is set to 153 and -153 for the rightward and leftward panning, respectively. While global dilation allows the "t-shaped" kernel to take into account the global camera shift, a locally adaptive mechanism is needed to synthesize new views of locally variable disparity. Such a mechanism is realized by first generating multiple images with the "t-shaped" kernel at N different dilations and blending them per-pixel in a locally adaptive manner. The blending is a weighted sum of filtered images by the "t-shaped" kernel with N different dilations, where the blending weights (w 1, w 2, . . ., w N) control the local dilation per-pixel and are learned via a convolutional neural network (CNN) along with the parameter values of the "t-shaped" kernel. Let |g d | be the maximum dilation value that is a fractional number. filtered images according to the corresponding blending weights (w 1, w 2, . . ., w N). Based on the N different global dilations, the output image value I t o (p) at a pixel location p can be calculated as where indicates a blending weight for the i-th global dilation. We propose an end-to-end trainable CNN, called the "monster-net" (monocular to stereo net). The monster-net is made of two main building blocks, a novel view synthesis network, the "t-net", and a resolution restoration block, the "sr-block". Given an input center image I c and pan amount P a, the final output panned image I o is obtained by sequentially applying the aforementioned modules by where θ t and θ sr parameterize the t-net and the sr-block respectively. {I n cs} is the stack of progressively shifted-downscaled versions of the input center image I c described in the SR-BLOCK section. The t-net. The "t-net" estimates both the "t-shaped" global dilation kernel parameters and the adaptive local dilation weights. The t-net is designed to have large receptive fields to synthesize detailed image structures of a new view image which corresponds to a shifted camera position. This is because such a large receptive field is useful in capturing the global image structure and contextual information for a new view image to be synthesized. 
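A minimal sketch of the global dilation and the locally adaptive blending just described is given below. The division of the pan amount by the number of long-wing taps and the per-pixel weighted sum follow the text; how the N candidate dilations are spaced below |g_d| is an illustrative assumption.

```python
import torch

def global_dilation(pan_amount, n_long=32):
    """g_d = P_a / (number of taps in the long wing); the sign encodes the pan
    direction and the magnitude the tap spacing in pixels."""
    return pan_amount / n_long

def blend_local_dilations(filtered_images, blend_weights):
    """Locally adaptive dilation: per-pixel blend of the images synthesized with
    the t-shaped kernel at N different dilations.
    filtered_images: list of N tensors (B, 3, H, W).
    blend_weights:   (B, N, H, W), assumed softmax-normalized along dim=1."""
    out = torch.zeros_like(filtered_images[0])
    for i, img in enumerate(filtered_images):
        out = out + blend_weights[:, i:i + 1] * img
    return out

# example: rightward pan of 153 px with a 32-tap long wing
g_d = global_dilation(153.0)                                  # ~4.8 px between taps
candidate_dilations = [max(1, round(abs(g_d) * f)) for f in (0.5, 0.75, 1.0)]
```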
For this, an auto-encoder with skip connections (not a U-net structure) is adopted, which allows to effectively have very large receptive fields and to efficiently fuse global and local (fine details) information on the decoder stage. For better feature extraction, we adopt the residual connections in the encoder side as proposed by (b). The t-net estimates all necessary values to perform the operation described by Equation. The t-net, depicted in Figure 6, has two output branches: the first output branch yields 81 channels, where the first 49 are horizontal parameter maps and the following 32 vertical parameter maps; the second output branch generates the 3-channel blending weight maps for the local adaptive dilation. That is, each channel-wise vector at a pixel location for the first output branch corresponds to the t-kernel parameter values [T c, T, and each channel-wise vector for the second output branch corresponds to the blending weights [w 1, w 2, . . ., w N] for local dilations in Equation. As our t-net is devised to generate arbitrarily panned novel views, feeding the pan amount as a 1-channel constant feature map (P a (p) = P a ∀ p) helps the network take into account the varying pan direction and the amount of occlusions on the 3D panned output. The effect of feeding the pan amount is further discussed in appendix A-1. Super resolution (SR) block. As generating a full resolution dilation-adaptive t-kernel would be computationally too expensive, we propose to estimate it at a low resolution (LR) to synthesize a novel view of the half-resolution, and then to apply deep learning based SR techniques to bring the LR novel view to the high (or original) resolution (HR). In comparison, in Deep3D and SepConv, Figure 7: (a) Shifted-LR versions of the center-view contain different information as they are sampled from different groups of pixels via bilinear interpolation depending on the stride (controlled by the maximum disparity). (b) Our light sr-block. All convs have 3x3 kernels otherwise specified. the estimated LR kernel is upscaled with conventional methods to the HR and then applied to the input image(s), which is a costly operation as it is carried out in the HR dimensions and can lead to blurred areas as the kernel is just bilinearly interpolated. In our proposed pipeline, instead of utilizing common single image SR methods like (; ;), we propose to apply a stereo-SR method. The stereo-SR technique in takes a LR stereo pair (left and right views) as input and progressively shifts the right-view producing a stack that is concatenated with the left-view and later processed by a CNN to obtain the super-resolved left-view. This process is made at an arbitrary and fixed stride (e.g. 1 pixel at every step of the stack) and does not take into account the maximum disparity between the input views. For our Deep 3D Pan pipeline, we propose to use the maximum disparity prior that can be obtained from the long wing of the t-kernel to dynamically set the shifting stride. Additionally, instead of interpolating and processing the low resolution panned view I t o (p) on the HR dimensions, we progressively shift and then downscale the high-resolution center view I c by a factor of x2. This allows our sr-block to operate on the LR dimensions without performance degradation, as high frequency information in the horizontal axis is not lost but distributed along the levels of the shifted center view stack as depicted in Figure 7 -(a). 
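The two output branches of the t-net and the constant pan-amount channel can be sketched as follows. The backbone is assumed to be any encoder-decoder producing `feat_ch` feature channels; the layer sizes and names here are illustrative, only the 49 + 32 kernel channels, the 3 blending channels, and the constant P_a map follow the description above.

```python
import torch
import torch.nn as nn

class TNetHead(nn.Module):
    """Output stage of the t-net (sketch): one branch predicts the 81 per-pixel
    t-kernel values (49 horizontal + 32 vertical taps), the other the 3 per-pixel
    blending weights for local dilation."""

    def __init__(self, feat_ch=64):
        super().__init__()
        self.kernel_branch = nn.Conv2d(feat_ch, 81, 3, padding=1)
        self.blend_branch = nn.Conv2d(feat_ch, 3, 3, padding=1)

    def forward(self, features):
        kernels = self.kernel_branch(features)                      # (B, 81, H, W)
        horizontal, vertical = kernels[:, :49], kernels[:, 49:]
        blend = torch.softmax(self.blend_branch(features), dim=1)   # (B, 3, H, W)
        return horizontal, vertical, blend

def with_pan_amount(image, pan_amount):
    """Feed the pan amount P_a to the network as a constant extra input channel."""
    b, _, h, w = image.shape
    pa = torch.full((b, 1, h, w), float(pan_amount),
                    dtype=image.dtype, device=image.device)
    return torch.cat([image, pa], dim=1)
```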
Our sr-block, depicted in Figure 7 -(b), is a simple, yet effective module that takes as input the LR I t o view and the shifted-downscaled center view stack I n cs described by where g(I, s) is an s-strided horizontal-shift and 2x down-scaling operator applied on image I. The stride s can take any real number and the ing image is obtained via bilinear interpolation. N s is the depth of the stack, and was set to N s = 32 for all our experiments). The stack is concatenated with the LR I t o and passed trough four Conv-ReLU layers followed by a residual connection as shown in Figure 7 -(b). The final step up-scales the ing features into the target resolution via nearest interpolation followed by a convolutional layer. The last layer reduces the number of channels to three for the final RGB output I o. Nearest upscaling was adopted as it yields no checkerboard artifacts in contrast with transposed or sub-pixel convolutions . Published as a conference paper at ICLR 2020 To demonstrate the effectiveness of our "t-shaped"-dilation-adaptive kernel, we performed several experiments on the challenging KITTI2012 , KITTI2015 , and CityScapes datasets. As these stereo datasets only consist of outdoor scenes, we also performed experiments on our indoors dataset, called the VICLAB STEREO dataset. Surprisingly, to our best knowledge, this is the first stereo dataset available that focuses on the indoor scene, which is planned to be publicly available for research. Additionally, our formulation of global and local adaptive dilations allows our monster-net to be trained on multiple stereo datasets at the same time, even if these have different baselines. Instead of over-fitting on a single camera baseline like the previous methods (Xie et), our monster-net can build knowledge when simultaneously trained on many datasets. To our best knowledge, our Deep 3D Pan pipeline is the first method designed to be trained on multiple baseline datasets concurrently for the stereoscopic view synthesis problem where unsupervised monocular depth estimation is even used particularly. For more details about the datasets and multi-dataset training, please see the appendix A-3. We compare our monster-net against the stereoscopic view synthesis SOTA: Deep3D and a version of SepConv modified for right-view synthesis. Firstly, for a fair comparison, the backbone convolutional auto-encoders for the Deep3D and SepConv were set up to be equivalent to our t-net's, that is, a six-stage encoder-decoder with skip connections and residual blocks in the encoder side. Secondly, we compare our monster-net with Deep3D-B, a "Bigger" version of Deep3D, where, instead of 32 elements in the 1D kernel as in its original work, we use 49 elements to match the number of horizontal kernel values in our t-net. Thirdly, we compare against SepConv-D, a dilated version of the SepConv such that the receptive field of the separable convolutions has a size of 153x153. The Deep3D and the SepConv models are trained without using perceptual loss as in their original works. For a more meaningful comparison, the Deep3D-B and the SepConv-D are trained with a combination of l 1 and perceptual loss l p , and demonstrate that a better loss function than l 1 does not contribute enough to the stereoscopic view synthesis problem. For more implementation details, reefer to the appendix A-4. 
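A sketch of the shifted-downscaled stack and the light sr-block is given below. The convolution widths, the sign convention of the horizontal shift, and the placement of the residual connection are assumptions of this sketch; only the stack depth, the fractional stride from the maximum-disparity prior, the four conv layers, and the nearest upscaling followed by a final conv come from the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def shifted_downscaled_stack(center, max_disparity, n_s=32):
    """Build {I_cs^n}: shift the HR center view horizontally by n*stride pixels
    (bilinear, fractional strides allowed) and downscale by x2, so horizontal
    high frequencies are spread across the stack rather than lost."""
    b, _, _, w = center.shape
    stride = max_disparity / n_s
    stack = []
    for n in range(n_s):
        tx = 2.0 * n * stride / max(w - 1, 1)         # pixel shift in normalized coords
        theta = torch.tensor([[1.0, 0.0, tx], [0.0, 1.0, 0.0]],
                             dtype=center.dtype, device=center.device)
        grid = F.affine_grid(theta.unsqueeze(0).repeat(b, 1, 1), center.shape,
                             align_corners=False)
        shifted = F.grid_sample(center, grid, align_corners=False)
        stack.append(F.interpolate(shifted, scale_factor=0.5, mode='bilinear',
                                   align_corners=False))
    return torch.cat(stack, dim=1)                     # (B, 3*n_s, H/2, W/2)

class SRBlock(nn.Module):
    """Light sr-block (sketch): fuse the LR panned view with the shifted-downscaled
    stack through four 3x3 conv layers and a residual connection, then upscale x2
    with nearest interpolation followed by a final conv."""

    def __init__(self, n_s=32, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3 + 3 * n_s, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1))
        self.last = nn.Conv2d(3, 3, 3, padding=1)

    def forward(self, lr_pan, stack):
        x = lr_pan + self.body(torch.cat([lr_pan, stack], dim=1))   # residual
        x = F.interpolate(x, scale_factor=2.0, mode='nearest')
        return self.last(x)
```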
Additionally, we compare the quality of the embedded disparity in the long wing of the "t-shaped" kernel with those of the state-of-the-art models for the monocular depth estimation task. For that, we first define a disparity refinement sub-network that uses the primitive disparity obtained from the long wing of the "t-shaped" kernel as prior information. Secondly, we define a special postprocessing (spp) step, which, instead of relying on a naive element wise summation as in , takes into account the ambiguities of the first and second forward passes to generate a remarkable sharp and consistent disparity map. For more details on the refinement block and our special post-processing, reefer to the appendix A-2. Table 1 shows the performance comparison for our method and previous works. It is important to mention that our monster-net performs inference on full resolution images, while the previous approaches for single-view novel view synthesis perform estimation on reduced resolution inputs. Our method outperforms the Deep3D baseline by a considerable margin of 0.7dB in PSNR, 2.0 in RMSE, and 0.03 in SSIM. The qualitative are shown in Figure 8. Our method produces superior looking images. In Deep3D and SepConv, many objects appear too blurred such that their boundaries can hardly be recognized in the synthetic images (e.g the motorcycle, persons, traffic signs, etc.). We challenged the models trained on KITTI (K) to perform inference on the CityScapes validation split (CS), and observed that our method generalizes much better than the Deep3D baseline with up to 3dB higher in PSNR. When training the monster-net with K+CS, we get an additional improvement of 4dB PSNR in the validation CS dataset. Incorporating an indoor dataset to our training pipeline is also possible, making our network applicable to a wide variety of scenarios. We added the VI-CLAB STEREO (VL) dataset to the training, that is K+CS+VL, and observed little impact on the K dataset performance as shown in Table 1. We also tested the performance of our monster-net on the validation split of the VL dataset. We observed that our full monster-net trained on K+CS achieved a mean PSNR of 19.92dB, while achieving a mean PSNR of 21.78 dB when trained on K+CS+VL. For a network trained on the outdoors dataset only it is difficult to generalize to the indoors case, as the latter contains mainly homogeneous areas, whereas the outdoors case mainly contains texture rich scenes. Visualizations on CS and VL, and ablation studies that prove the efficacy of each of our design choices can be found in the appendices A-5, A-6 and A-8. With the addition of a relatively shallow disparity refinement sub-network, the monster-net remarkably outperforms all the state-of-the-art models for the unsupervised monocular depth estimation task, as shown in Table 2. Our monster-net with disparity refinement even outperforms the supervised monocular disparity estimation methods such as and multiple view unsupervised methods such as (a;). We presented an adaptive "t-shaped" kernel equipped with globally and locally adaptive dilations for the Deep 3D Pan problem, defined as the task of arbitrarily shifting the camera position along the X-axis for stereoscopic view synthesis. Our proposed monster-net showed superior performance to the SOTA for right-view generation on the KITTI and the CityScapes datasets. Our monsternet also showed very good generalization capabilities with 3dB gain in PSNR against the Deep3D baseline. 
In addition, our method presents no-discontinuities, consistent geometries, good contrast, and naturally looking left or right synthetic panned images. Our monster-net can be extended for image registration, monocular video to stereo video, and generation of novel views at any camera translation by just allowing pixel-wise rotation of our "t-shaped" kernel. The larger the pan amount, the greater the occlusions to be handled in the synthetic output image. The effect of feeding the pan amount P a as a one-channel constant feature to the t-net can be visualized in Figure 9, where multiple primitive disparity and occlusion maps are depicted for different pan amounts. As shown in Figure 9, the network generates different maps for different magnitudes and directions of P a while keeping the input center image I c unchanged, confirming the effect of the pan amount as prior knowledge to the network. The difference between the disparity maps can be appreciated in the "holes" or "shadows" casted in the objects borders, as they represent the occluded content seen from the output 3D panned image. In the red box in Figure 9 it is observed that the shadows casted by leftward and rightward camera panning appear in opposite sides ob the objects. In the yellow box, it is observed that the larger the P a, the larger the shadows projected, as more occlusions are to be handled. The disparity refinement network architecture, as depicted in Figure 10, has two input branches: one takes a stack of the 2x bilinearly upscaled center image disparity prior (D cp) and the RGB center image (I c); and the other is fed with a stack of the 2x bilinearly upscaled output panned view disparity prior (D op) and the generated panned view (I o). The disparity refinement block is a relatively shallow auto-encoder with skip connections and rectangular convolutions as fusion layers in the decoder stage. This allows to increase the receptive field size in the horizontal axis, thus improving the stereo matching performance, as suggested by (b). We configure the output layer of our refinement network with the last layer of Gonzalez & Kim (2019b)'s architecture, which allows to do ambiguity learning in our refinement block. Ambiguity learning allows the network to unsupervisedly account for occlusions and complex or clutered regions that are difficult to minimize in the photometric reconstruction loss (b). We train the refinement block with the loss functions defined in (b) and a new additional loss towards producing the refined disparity maps similar to the primitive disparity maps D cp and D op. The refinement network is encouraged to produce refined center and panned disparity maps D c and D o similar to their primitive counterparts of half the resolution (as they are estimated from the t-net), by minimizing the following primitive disparity loss where D 1/2 c and D are the bilinearly downscaled and refined center and panned disparity maps by a factor of 1/2, respectively. We give a weight of 0.5 to this new loss term. The disparity refinement block can be trained end-to-end along with the monster-net or from a pre-trained monster-net. Instead of relying on naive post-processing approaches like in , which consist on running the disparity estimation twice with normal and horizontally flipped inputs and then taking the average depth, we define a novel special post-processing step (spp) by taking into account the ambiguities in the first and second forward pass. 
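The primitive disparity loss for the refinement block, as described above, can be sketched as follows. The use of an l1 distance is an assumption; the excerpt states the x1/2 downscaling and the 0.5 weight but not the norm.

```python
import torch.nn.functional as F

def primitive_disparity_loss(refined_center, refined_pan, prim_center, prim_pan,
                             weight=0.5):
    """Keep the refined disparity maps close to the half-resolution primitive maps
    extracted from the t-kernel (sketch). The refined maps are downscaled by x1/2
    before comparison, since the primitive maps come from the t-net at half resolution."""
    d_c = F.interpolate(refined_center, scale_factor=0.5, mode='bilinear',
                        align_corners=False)
    d_o = F.interpolate(refined_pan, scale_factor=0.5, mode='bilinear',
                        align_corners=False)
    return weight * ((d_c - prim_center).abs().mean() +
                     (d_o - prim_pan).abs().mean())
```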
We noted that the ambiguity learning from (b), which we incorporate in our disparity refinement block, can be used to blend the ing disparities from the first and the second forward pass such that only the best disparity estimation (or ambiguity free) from each forward pass is kept on the final post-processed disparity. Figure 11 depicts our novel post-processing step, which consist on running the forward pass of our monster-net with disparity refinement block with P a = 153 and P a = −153, for the first and the second pass respectively. Then, the generated ambiguity masks of each forward pass are concatenated to create a two-channel tensor and passed through a softmax operation along the channel axis. The ing soft-maxed ambiguities are used to linearly combine the disparity maps of each forward pass. As can be observed in Figure 11, the soft-maxed ambiguity mask effectively segment the best disparity estimation from each forward pass. Figure 12 shows the primitive disparity map, the subsequent refinement step, the naive post-processing (pp) and our novel post-processing (spp). The KITTI dataset is a very large collection of mid-resolution 370x1226 stereo images taken from a driving perspective. We used the KITTI split as suggested in , which consists of 29,000 stereo pairs from 33 different scenes of the KITTI2012 dataset. We set apart the KITTI2015 dataset for validation as it contains 400 images excluded from the KITTI split. Additionally, the KITTI2015 contains sparse disparity ground truths (GTs) which are obtained from LIDAR and then are refined by car CAD models. We use these GTs to evaluate the quality of the estimated disparity that can be extracted from the long wing of the t-kernel. CityScapes is a higher resolution stereo dataset with 24,500 stereo pairs that we extract from the train, test and train extra directories for training. The val directory is left for validation with 500 stereo pairs. We pre-process the CityScapes dataset for faster and more robust training. We first remove the top 25, bottom 200 To our best knowledge, our Deep 3D Pan pipeline is the first method designed to be trained on multiple baseline datasets at the same time for the stereoscopic view synthesis problem and the unsupervised monocular depth estimation task. It should be noted that the work of only handled the supervised monocular depth estimation task for multiple datasets with different camera intrinsics utilizing "CAM-Convs", which is a simpler problem than our unsupervised problem. While they require to know the intrinsic matrix for each dataset, along with added computational complexity to perform the "CAM-Convs", our method only requires to know the dataset baseline (distance between cameras). To train on multiple datasets, the only required step is to multiply the P a by the relative baseline with respect to a reference dataset. For instance, the baseline in the KITTI dataset is about 54cm, and, as mentioned before, we have set this baseline to correspond to P K a = 153. Then for the CityScapes dataset, whose baseline is 22cm, its pan amount will be given by P For the training of all models, we used the Adam optimizer with the recommended β's (0.5 and 0.999) for the regression task with a batch size of 8 for 50 epochs. The initial learning rate was set to 0.0001 and was halved at epochs 30 and 40. 
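The special post-processing blend and the baseline-scaled pan amount described above can be sketched as below. How the sign of the learned ambiguity mask maps to confidence is left to whatever convention the refinement block learned; the softmax-and-blend structure and the baseline scaling follow the text.

```python
import torch

def special_post_process(disp_fwd, amb_fwd, disp_bwd, amb_bwd):
    """spp (sketch): disparities from the two forward passes (P_a = +153 and -153)
    are blended per pixel by the channel-wise softmax of their ambiguity masks.
    disp_*: (B, 1, H, W) refined disparities; amb_*: (B, 1, H, W) ambiguity masks."""
    w = torch.softmax(torch.cat([amb_fwd, amb_bwd], dim=1), dim=1)   # (B, 2, H, W)
    return w[:, :1] * disp_fwd + w[:, 1:] * disp_bwd

def scaled_pan_amount(reference_pan, dataset_baseline_m, reference_baseline_m=0.54):
    """Multi-dataset training: scale P_a by the relative stereo baseline, e.g.
    KITTI (0.54 m) -> 153 px implies CityScapes (0.22 m) -> 153 * 0.22 / 0.54 px."""
    return reference_pan * dataset_baseline_m / reference_baseline_m
```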
The following data augmentations on-the-fly were performed: Random resize with a factor between 0.5 and 1 conditioned by the subsequent 256x512 random crop; random horizontal flip, random gamma, and random color and RGB brightness. It was observed that vertical flip has made the learning more difficult, thus it was avoided. When training our model (the monster-net), the training images were sampled with a 50% chance for rightward (P a > 0) or leftward (P a < 0) panning. Additionally, random resolution degradation was applied to the input only by down-scaling followed by up-scaling back to its original size using a bicubic kernel with a scaling factor between 1 and 1/3 while keeping the target view unchanged. Random resolution degradation has improved our by making the network more sensitive to edges and forcing it to focus more on structures and less on textures. Similar "tricks" have been used in previous works in the form of adding noise to the input of discriminators (Sønderby et al., 2017;) to make them invariant to noise, and more sensible to structures. When training our monster-net, either the left or the right view can be used as the input during training. When the left-view is used as the center input view I c, the pan amount P a is set to 153 and the ground truth I gt is set to be the right image. In the opposite case, when the center view is the right image, P a is set to -153 and the left-view is set as the GT. For our monster-net, the "t-shaped" kernel was set to have short wings of 16 elements and a long wing of 32 elements plus one center element T c. For the Deep3D, the 1D kernel is set to have 33 elements and 49 elements for the Deep3D-B variant. For the SepConv and the SepConv-D cases, we set the horizontal and vertical 1D kernels to have 1x51 and 51x1 shape, respectively, as in . We train our monster-net with a combination of l 1 loss and perceptual loss . The later measures the distance between the generated view (I o or I t o) and the ground truth (I gt) images in the deep feature space of a pre-trained network for image classification. The perceptual loss is especially good to penalize deformations, textures and lack of sharpness. The mean square error of the output of the first three max-pooling layers from the pre-trained V GG19 , denoted by φ l, was utilized as the perceptual loss function. To balance the contributions of the l 1 and perceptual losses, a constant α p = 0.01 was introduced. This combination of loss terms was applied to both the low-resolution panned image I t o and super-resolved panned image I o to yield the total loss function L pan as follows: where I 1/2 gt is the bilinearly downscaled version of the ground truth by a factor of 1/2. Visualizations on the CittyScapes (CS) datasets for our monster-net trained on KITTI (K) and on KITT + CityScapes (K+CS) are depicted in Figure 13. The First row of Figure 13 shows the synthesized images from the Deep3D baseline, it can be noted that it over-fits to the training baseline of the KITTI dataset, performing very poorly on CityScapes. The subsequent rows show the for our monster-net when trained without and with the CityScapes dataset. Our models generate very good structures and sharp panned views as depicted the red highlighted regions on Figure 13 for both cases of with and without CityScapes training. When trained on KITTI-only, our method generalizes very well on the CityScapes dataset, with a performance improvement of 3dB over the Deep3D baseline as shown in Table 1. 
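The total loss described above (l1 plus VGG19 perceptual terms, applied at both the low-resolution and super-resolved outputs with alpha_p = 0.01) can be sketched in PyTorch as follows. The slice indices follow torchvision's vgg19.features module, and inputs are assumed to be normalized the way VGG expects; this is an illustrative reconstruction of L_pan, not the authors' training code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class VGGPerceptual(nn.Module):
    """Perceptual loss on the outputs of the first three max-pooling layers of a
    pre-trained, frozen VGG19 (sketch)."""

    def __init__(self):
        super().__init__()
        vgg = torchvision.models.vgg19(pretrained=True).features.eval()
        for p in vgg.parameters():
            p.requires_grad = False
        # sub-networks ending at the 1st, 2nd and 3rd max-pool layers
        self.slices = nn.ModuleList([vgg[:5], vgg[5:10], vgg[10:19]])

    def forward(self, pred, target):
        loss, x, y = 0.0, pred, target
        for block in self.slices:
            x, y = block(x), block(y)
            loss = loss + F.mse_loss(x, y)
        return loss

def pan_loss(lr_pan, hr_pan, gt, perceptual, alpha_p=0.01):
    """L_pan: (l1 + alpha_p * perceptual) applied to the LR panned view against a
    x1/2 downscaled ground truth and to the super-resolved output against the
    full-resolution ground truth."""
    gt_half = F.interpolate(gt, scale_factor=0.5, mode='bilinear', align_corners=False)
    def term(out, ref):
        return (out - ref).abs().mean() + alpha_p * perceptual(out, ref)
    return term(lr_pan, gt_half) + term(hr_pan, gt)
```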
When trained on K+CS, we obtain an additional improvement of 4dB against the KITTY-only trained monster-net. Additionally, we present for our monster-net trained with and without perceptual loss, (L 1) and (l 1 + l p) respectively, on the CityScapes dataset. Sharper with clear edges and structures are achieved when utilizing the perceptual loss, as depicted in the highlighted areas in Figure 13. A network that is trained on outdoor datasets only is not able to generalize well on highly homogeneous areas that are common in the indoors datasets but rare in the outdoor scenes. Visualization of the synthetic views generated for the VICLAB STEREO (VL) indoors dataset is provided in Figure 14. We compare the of our network trained on K+CS versus those of our monster-net trained on K+CS+VL. The latter achieves better generalization, as depicted in Figure 14, with marginal performance decrease in the KITTI dataset (-0.09dB) and considerable quality improvement on the VICLAB STEREO dataset (+1.86dB), as showed in the last row of Table 1. A.7 3D PANNING BEYOND THE BASELINE As our method allows for arbitrary camera panning, it is possible to perform 3D pan beyond the baseline as depicted in Figure 15, where the pan amount was set to go 30% beyond the baseline for both leftward and rightward camera panning, that is P a = −200 and P a = 200 for the per-scene top and bottom samples respectively, where the input image for both pan amounts was set to be the left-view. It is observed that the monster-net with adaptive dilations generates naturally looking new views with consistent structures and without discontinuities even at beyond training baseline 3D panning. We demonstrate the contribution of our design choices in this section. Our main contribution is the adaptive "t-shaped" kernel equipped with globally and locally adaptive dilations. Figure 16 -(a) shows the effect of adaptive dilations in comparison with the fixed dilation. As can be observed, the ing synthesized image by a fixed local dilation kernel shows unnaturally looking regions with low contrast, discontinuities and blurs. Unlike any previous work, our pipeline can greatly benefit from training on multiple datasets at the same time, as shown in Figure 16 -(b). That is, our method greatly benefits from training on two datasets (KITTI and CityScapes) as it exploits the baseline information via its global adaptive dilation property. Training on both KITTI and CityScapes contributes to improved geometry reconstruction as the network is exposed to a wide variety of objects at many different resolutions where, in addition to applying random resizing during training, the resolutions and baselines of these two datasets are very different. Figure 16 -(c) shows the effect of utilizing our sr-block. Even if the quality of the panned image I t o is good in terms of structure, the sharpness is further improved by the addition of the super resolution block. Finally, we analyze the effect of the perceptual loss. By utilizing the perceptual loss, our monter-net is able to better synthesize rich textures and complex or thin structures, as depicted in Figure 16 -(d), even though the PSNR and SSIM are slightly lower as shown in Table 1. The last is known as the "perception-distortion which suggests that better synthetic looking images not always yield higher PSNR/SSIM. Our monster-net with refinement block beats the current state-of-the-art methods for unsupervised monocular depth estimation in terms of prediction accuracy for the KITTI2015 dataset. 
As shown in Table 3, the primitive disparity D p, which can be extracted from the longer wing of the "t-shaped" kernel, is already among the best performing unsupervised methods. When we add the refinement block with ambiguity learning, our model outperforms the state-of-the-art. Furthermore, we get an additional improvement in the a 1 accuracy metric when we add our novel special post-processing (spp) step. The qualitative comparison against previous methods and the ground truth disparity is shown in Figure 17. It is noted that our monster-net with disparity refinement and special post-processing generates very reliable disparity maps even on thin structures and image borders. Additionally, our method benefits from very good detection of far-away objects.
Table 3: Disparity estimation performance on the KITTI2015 metrics from . Models are trained with (V) video, (S) stereo, semi global matching (SMG) or GT depth (Supp), and can take stereo inputs (s), nine consecutive input frames during testing and training (9-view), or one input frame during testing and nine consecutive views as supervision during training (1/9-view). Additionally, (st) stands for student, (t) for teacher, (pp) for post-processing and (spp) for special post-processing. The best performing models in terms of the a 1 threshold, which is the percentage of disparity values with a relative error less than 0.25, are highlighted in bold.
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
B1gF56VYPH
Novel architecture for stereoscopic view synthesis at arbitrary camera shifts utilizing adaptive t-shaped kernels with adaptive dilations.
Deep Neutral Networks(DNNs) require huge GPU memory when training on modern image/video databases. Unfortunately, the GPU memory as a hardware resource is always finite, which limits the image resolution, batch size, and learning rate that could be used for better DNN performance. In this paper, we propose a novel training approach, called Re-forwarding, that substantially reduces memory usage in training. Our approach automatically finds a subset of vertices in a DNN computation graph, and stores tensors only at these vertices during the first forward. During backward, extra local forwards (called the Re-forwarding process) are conducted to compute the missing tensors between the subset of vertices. The total memory cost becomes the sum of the memory cost at the subset of vertices and the maximum memory cost among local re-forwards. Re-forwarding trades training time overheads for memory and does not compromise any performance in testing. We propose theories and algorithms that achieve the optimal memory solutions for DNNs with either linear or arbitrary computation graphs. Experiments show that Re-forwarding cuts down up-to 80% of training memory on popular DNNs such as Alexnet, VGG, ResNet, Densenet and Inception net. The standard DNN training process consists of two alternated stages: forward and backward. FIG0 (a) illustrates an example of feed-forward neural networks. In the forward stage, the network takes an input tensor, [BatchSize × Channel × W idth × Height], and computes the tensors at each layer until producing the output. In the backward stage, difference between the output and ground truth is passed back along the network to compute the gradients at each layer. The regular training approach saves tensors at all layers during forward, because they are all needed to compute gradients during backward. The total memory cost is the sum of cost over all layers. In popular backbone DNNs for feature extraction of images, such as AlexNet BID13 ), VGG BID22 ) and ResNet BID10 ), the memory cost increases quadratically with the input image resolution and network depth. For example, given an median size input tensor of, ResNet101 requires around 5000 MB. In more challenging tasks, DNNs that detect small objects and large number of object categories require input image resolution of more than 600 × 600 BID18; BID23; BID17 ). The memory issue is worse for video-based DNNs, such as CDC BID21 ), C3D BID12 ) and 3D-ResNet BID9 ). To model complex activities in video, the input tensor may contain 64 frames. Moreover, DNN training takes much more memory than testing. In order to train DNNs with large databases and big learning rate, the batch size can be up to 64. In training DNN compositions, such as Generative adversarial networks (GANs), multiple generator and discriminator networks are simultaneously stored in GPU memory. Existing efforts to address memory issues presented three main approaches: Better single GPUs. Recent GPUs provide larger memory at the expense of exponentially growing price and power consumption. For instance, from TitanXp, Quadro P6000 to Tesla V100, for 1-2.7 times increase in memory, the prices increase 2.8-8.5 times. Parallelization among multiple GPUs BID8; BID20;; BID15 BID16; BID27; BID2; BID1 ), which requires expensive The regular approach saves all tensors during forward, and uses these tensors to compute gradients during backward. 
(b) Reforwarding (our) saves a subset of tensors during the first forward, and conducts "Re-forward" to compute tensors for gradients during backward.clusters, introduces substantial I/O cost, and does not reduce the total memory cost. Low-level heuristic techniques. Optimization of computation graphs BID3 ), which merges inplace operations into non-inplace operations to cut down memory. Liveness analysis BID3 ), which dynamically recycles garbage tensors in training epochs. These approaches are specific to certain DNN structures, data and tasks. To address above issues, we propose a fundamental approach that explores trade-off between memory and computation power of GPUs. Note that recent affordable GPUs, although limited in memory (12GB), provide exceptional improvement in GPU cores and FLOPS. Trading computational time for memory is a very attractive solution that make it possible to train very heavy DNNs with finite GPU memory. Our approach only saves tensors at a subset of layers during the first forward, and conduct only extra local forwards to compute the missing tensors needed during backward. We call the extra forward process as Re-forwarding. The total memory cost is the sum of the cost at the subset of layers and the maximum memory cost among local re-forwards. Training with Reforwarding, see FIG0 (b), leads to substantial memory reduction. We propose sophisticate theories and efficient algorithms that achieve the optimal memory solution of arbitrary computation graphs. To alleviate the memory pressure from a single GPU processor, many researchers utilized the wellestablished techniques for distributed computation BID8; BID20 BID15 BID16 BID27; BID2; BID1 ). These techniques distribute memory pressure to possibly infinite GPUs or server clusters, but do not reduce the total memory cost of DNNs. Other researchers reduced the memory on finite hardware by optimizing computation graph of DNN and performing liveness analysis. The computation graph of DNNs describes the dependencies of tensors among layers. Liveness analysis recycles garbage to manage memory. These ideas were originated from compiler optimization BID3 ) and has been widely adopted by deep learning frameworks: Theano BID4; BID5 ), MXNet BID6 ), Tensorflow BID0 ) and CNTK BID26 ). Some other techniques efficiently swap data between CPU and GPU BID25; BID19 ). These techniques usually cost extra I/O time and still do not actually reduce the total memory cost. The closest work to our approach, Chen et al. ), uses the gradient checkpoints (similar to the subset of layers in Re-forwarding). However, ) only worked on linear computation graph via a heuristic algorithm. Our approach generates optimal solutions for both linear and arbitrary computation graphs. Our algorithm reduces training memory by manipulating high-level tensors, therefore is generalizable to any DNNs and their compositions. All previous techniques are compatible to our approach and can further improve the memory efficiency of DNN training. Denote a computation graph as G = E, V. E = {e i} and V = {v i} are the edges and vertices in the computation graph, respectively. In deep neural networks, the vertices represent the tensors and the edges represent operations. Denote function l(·) as a measure of memory cost. V R is the subset of vertices saved during the first forward. l(v i) is defined as the memory cost of storing vertex v i. 
For two adjacent vertices v i and v j in set V R, the memory cost during re-forwarding from v i to v j is defined as l(v i, v j) = j−1 t=i+1 l(v t), which is the sum of cost over all the vertices between v i and v j. Using these notations, the memory cost of training with re-forwarding is formulated as DISPLAYFORM0 where the first term is the sum of the memory cost of all the stored tensors, and the second term is the maximal cost among the re-forwards. For easy illustration, we start by formulating Re-forwarding on Linear Computation Graphs (LCG) (FIG1). For LCGs, Eqn. 1 can be solved in two cases. Case LCG with Non-identical Vertex Cost: When the assumption of identical cost does not hold, the solution to Eqn. 1 does not have an analytic form. Denote the maximal Re-forward cost max j l(v j, v j+1) as a constant C, and the solution to Eqn. 1 is reduced to solving for min DISPLAYFORM1 Set the maximal term as l(v i, v j) Construct Accessibility Graph 4:Find the shortest path in the Accessibility Graph as the solution 5:Compute the actual total cost of the solution 6:Save the solution if it's better. Suppose the actual max term of this solution is B, and l(v i, v j) = C, skip the loops where DISPLAYFORM0 All the Re-forward costs in an optimal solution satisfy the constraint l(v j, v j+1) ≤ C. We solve Eqn. 1 by constructing a new graph, called Accessibility Graph DISPLAYFORM1 DISPLAYFORM2 is equivalent to finding the shortest path from the source vertex and the target vertex in the Accessibility Graph. Notice that in the optimal solution, the max term equal the one maximal term among all l(v i, v i+1) terms. To traverse all possible max terms, we can simply compute the loss of every vertex pair and use it as a possible max term. Given a max term C, suppose the actual max term of the solution under C is B and B < C. It's obvious that for all the max terms B ≤ max < C, the solution would be the same solution. Therefore, these max terms can be skipped. Algorithm 1 summarizes the process for searching an optimal solution for LCG. Suppose there are N vertices in the computation graph, the time complexity of Algorithm 1 is O(N 4) 1. As generalization of DNNs with LCG, we present theory 2 and algorithms for DNNs with Arbitrary Computation Graphs (ACG), in particular the acyclic directed graphs FIG1 ). The optimal solution of Re-forwarding corresponds to an optimal division of ACG, such that memory cost (Eqn. 1) is minimum. We denote that an ACG is divided into end-to-end segments by a set of vertices. These end-to-end segments can have multiple endpoint vertices, for example, multiple source vertices and multiple target vertices. In this paper, as an assumption and also for simplification, these end-to-end segments are narrowed down to those with only one source vertex and one target vertex. Another assumption in the case of ACG is imposed on the operation that has multiple inputs: one can compute the gradients of output with respect to the gradients of inputs without using the current value of inputs. Examples of operations that meet this assumption are: concatenation (the gradient of output is also the concatenation of the gradient of input), add (the gradient of output equals the gradient of input), etc. An example that breaks this assumption is multiplication (the gradient of input depends on the input). Fortunately, most of the popular networks meet this assumption. A simple way to remove this assumption is to store all the input tensors of this multi-input operation. 
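Returning to the linear-graph case formulated above, the following sketch implements the idea of Algorithm 1 in a straightforwardly brute-force way: traverse candidate max re-forward terms (the cost of every vertex-pair segment), build the accessibility graph whose edge i -> j is allowed whenever the re-forward cost fits within the candidate term, take the shortest path weighted by the storage cost of the kept vertices, and keep the best total. Function names and the cost bookkeeping are illustrative; only the formulation follows the text.

```python
import itertools

def lcg_optimal_checkpoints(costs):
    """Optimal Re-forwarding checkpoints for a linear computation graph (sketch of
    Algorithm 1). costs[i] = l(v_i); v_0 and v_{n-1} are always stored.
    Returns (best_total_cost, stored_vertex_indices)."""
    n = len(costs)
    seg = lambda i, j: sum(costs[i + 1:j])               # re-forward cost l(v_i, v_j)
    best_total, best_set = float('inf'), None
    for a, b in itertools.combinations(range(n), 2):     # candidate max terms
        c_max = seg(a, b)
        dist, parent = [float('inf')] * n, [None] * n
        dist[0] = costs[0]
        for j in range(1, n):                            # shortest path, edge weight = l(v_j)
            for i in range(j):
                if seg(i, j) <= c_max and dist[i] + costs[j] < dist[j]:
                    dist[j], parent[j] = dist[i] + costs[j], i
        if dist[-1] == float('inf'):
            continue
        stored, j = [], n - 1                            # recover the stored subset
        while j is not None:
            stored.append(j)
            j = parent[j]
        stored.reverse()
        realized = max(seg(i, j) for i, j in zip(stored, stored[1:]))
        total = dist[-1] + realized                      # Eqn. 1: stored sum + max re-forward
        if total < best_total:
            best_total, best_set = total, stored
    return best_total, best_set

# e.g. a 6-vertex chain with per-tensor costs in MB
print(lcg_optimal_checkpoints([10, 40, 30, 40, 20, 10]))
```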
However, this is not modeled by our loss function and may lead to sub-optimal solution. In summary, there are only two assumptions in our approach: the segment in a solution only has two endpoints (source and target). the multi-input operation can compute the gradients of output without using the current value of input. Under these two assumptions, our approach is optimal for ACGs.. Definition 1. Closed Set: A set s containing vertices and edges is a closed set if and only if it satisfies the following three properties: 1. All the vertices of s have a common ancestor v i and a common descendent v j; 2. Denote the vertex subset of s as V, edge subset as E, and the set of edges between two arbitrary vertices of V ∪ {v i, v j} is E, the edge from v i to v j (if exists) as e ij. E must either be E or E − {e ij}; 3. An arbitrary v 1 ∈ V doesn't have edge with another arbitrary v 2 / ∈ V ∪ {v i, v j}. For multiple valid closed sets between v i and v j, we denote the largest one as DISPLAYFORM0 In the definition of Closed Set, property 1 corresponds to the two endpoint assumption in section 4.1 where the two endpoints become v i and v j in the definition. Property 2 confines the edge subsets of 1 More detailed complexity analysis is in the appendix due to space limitation 2 All proofs are in the appendix due to space limitation.s to be one of two cases: E or E − {e ij}. Both cases are valid although they have different edges. Property 3 guarantees the independence of such a set s, meaning that the vertices within s have no connections with other vertices outside s ∪ {v i, v j}. As there might be multiple valid closed sets between v i and v j, which corresponds to the Branched Closed Set in Definition 5, we denote the largest closed set between v i and v j as s ij and denote smaller closed set with an extra superscript, such as s 1 ij. Definition 3. Splitting Vertex: A vertex v t ∈ s ij is a splitting vertex of s ij if and only if s it exists, s tj exists and s ij = s it ∪ s tj ∪ {v t} and s it ∩ s tj = ∅ Definition 4. Splittable Closed Set (Type 1): closed set with at least 1 splitting vertex. The definition of Splitting Vertex is to describe whether a closed set can be divided into two linearly arranged closed set. A closed set is splittable if it has at least 1 splitting vertex and is defined as Closed Set Type 1. Definition 5. Branched Closed Set (Type 2): A closed set is branched if it has 0 splitting vertex and can be divided into branches: s ij = s DISPLAYFORM1 Definition 8. Division of Closed Set: For type 1, its division is the linear segments separated by all its splitting vertices; for type 2, its division is all its branches, any of which cannot be divided into more branches; for type 3, its division is its maximal split. For closed set type 1, it can be divided into linearly arranged segments. For closed set type 2, it can be divided into branches. So here we investigate the division of closed set type 3. As we don't want trivial division, for example, division that is formed by every edge in the closed set, we define Maximal Split to describe the split such that each member of the split is as large as possible. An example of maximal split is shown in FIG4. In the definition of maximal split, the term maximal is implied by saying that any subset of this split cannot be combined into a single closed set. If it can, then the maximal split will be formed by this larger closed set and all the rest of the previous split. For closed set type 3, we use its maximal split as its division. 
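One way the splitting-vertex test of Definition 3 could look in code is sketched below, assuming the computation graph is held in a networkx DiGraph; this is an editor's illustration of the definition, not the authors' Algorithm 2, and the exact handling of the optional edge e_ij is simplified.

```python
import networkx as nx

def is_splitting_vertex(g, v_i, v_j, interior, v_t):
    """v_t splits the closed set s_ij (interior vertices `interior`, endpoints v_i, v_j)
    when every other interior vertex lies strictly before or strictly after v_t and no
    edge bypasses v_t from the "before" side to the "after" side, so that
    s_ij = s_it + {v_t} + s_tj with disjoint halves."""
    before, after = {v_i}, {v_j}
    for v in interior:
        if v == v_t:
            continue
        if nx.has_path(g, v, v_t):
            before.add(v)
        elif nx.has_path(g, v_t, v):
            after.add(v)
        else:
            return False          # v cannot be assigned to either half
    return not any(g.has_edge(u, w) for u in before for w in after)

# tiny example: a diamond a->{b,c}->d followed by d->e; d splits s_{a,e}
g = nx.DiGraph([('a', 'b'), ('a', 'c'), ('b', 'd'), ('c', 'd'), ('d', 'e')])
print(is_splitting_vertex(g, 'a', 'e', {'b', 'c', 'd'}, 'd'))     # True
```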
Definition 9. Division Tree: Division tree is a representation of a computation graph, where the root node is the whole computation graph, the leaf nodes are all the single tensors in the computation graph, and for a non-leaf node, its children is the members of its division. With the division of 3 types of closed sets, the computation graph can be reorganized into a division tree (Figure 5) where a non-leaf node would be a closed set and its children would be its corresponding division. The root node is the whole computation graph, the largest closed set, and the leaf nodes would be single tensors in the computation graph. With division tree, we can apply divide-and-conquer to search for optimal solution. Figure 5: In this tree, the root node is the whole computation graph. All the leaf nodes are single tensors. Every other node except root and leaves is a member of the division of its parent. Theorem 1. The division tree of a computation graph is unique and complete. The uniqueness of the division tree indicates that the optimal solution of the division tree would also be the optimal solution of the whole computation graph. The completeness indicates that the division tree has included all the possible members of solution and represents the whole search space for the optimal solution. Theorem 1 is proved in the appendix. We search optimal solutions for ACGs by solving several sub-problems using Algorithm 2-4 respectively. Based on these components, we present our final solver as Algorithm 5.Algorithm 2 judges whether a vertex is a splitting vertex of a closed set. This algorithm mainly follows the Definition 3 and uses vertex set to check the property of a splitting vertex. With this algorithm, we can judge whether a closed set is type 1 and get its division if it is. Suppose there are N vertices in s ij, the time complexity of Algorithm 2 is O(N 2). Algorithm 3 examines whether a closed set is branched. It uses a growing algorithm to check whether an independent subpart of this closed set can form a closed set. If a non-trivial closed set s ij has an edge from v i to v j, then it's branched because this edge itself can be treated as a closed set. Combined with Algorithm 2, we can know the type of a closed set and get its division if it's type 2. Suppose there are N vertices in s ij, the time complexity of Algorithm 3 is O(N 2).Algorithm 4 addresses the problem of finding the maximal split, the division of a closed set type 3 s ij. First get all the possible closed sets within s ij and use a property of maximal split to judge whether this closed set is a member of the maximal split. The property is: there cannot exist another closed set s ab s ij but contains any member of this maximal split. This property is proved in Lemma 6 of the appendix. Suppose there are N vertices in s ij, the time complexity of Algorithm 4 is O(N 4).Algorithm 5 is the solver for ACGs. First, the division tree of the computation graph is built. Similar to the linear solver, a max term list is formed by the cost of all the possible closed sets for traverse. Given a max term, we propose a greedy idea: for a closed set, never expand it unless the its cost exceed the max term. In other word, if the max term doesn't allow a leap over this closed set, we expand it, otherwise, do not expand it. Because once expanded, some cost of other vertices inside this closed set might be introduced, and the cost will never be smaller than unexpanded. 
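The expand-or-not decision can be sketched as a recursion over the division tree (a simplified skeleton under our own naming: each node is assumed to carry its cost, type, and children; it conveys the greedy rule only and omits the cost bookkeeping of Algorithm 5):

```python
def expand_greedily(node, max_term, plan):
    """Greedy rule: never expand a closed set unless its cost exceeds the max term."""
    if node.cost <= max_term or node.is_leaf:
        plan.append(("reforward", node))            # leap over it in one local re-forward
        return
    oversized = [c for c in node.children if c.cost > max_term]
    within = [c for c in node.children if c.cost <= max_term]
    for child in oversized:                         # only oversized children are expanded
        expand_greedily(child, max_term, plan)
    if node.type == "linear":
        # Type-1: the unexpanded children form linear segments between the
        # expanded ones; those segments go to the LCG solver (Algorithm 1).
        plan.append(("solve_linear_segments", within))
    else:
        # Type-2/3: the remaining branches / split members stay unexpanded.
        plan.extend(("reforward", c) for c in within)
```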
If some children of the closed set type 1 are expanded, the rest reforms a few linear segments and still can be solved by the linear solver. If some children of the closed set type 2 or 3 are expanded, the other Initialize a vertex set s = {v k}. v k ∈ s ij is a randomly chosen vertex. while True do 7:For any v t ∈ s ij, v t ∈ s that has connection to any v k ∈ s, add v t to s. if No more vertex can be added to s then 9:Break 10:if s = {v ∈ s ij} then Return false Algorithm 4 Find the maximal split of a non-branched s ij with 0 splitting vertex DISPLAYFORM0 For all the vertices {v} that have paths from v k and have paths to v t. if ∃v 2 ∈ {v} and v 2 = v k, v t, v 2 has connection to a v 1 ∈ {v} then 4:Form a closed set s kt with all these vertices. 5: for each formed closed set s kt do If there doesn't exist a s ab such that s kt s ab s ij, put s kt into the maximal split. For all the children that have cost larger than current max term. Expand them and solve the next level. All the expanded children have separated the current closed set to linear segments. Solve all the linear segments with current max term. For all the children that have cost larger than current max term. Expand them and solve the next level. All the other members remain unexpanded. Summarize the total loss, save the current solution if it's better. We evaluated Re-forwarding on two main groups of neural networks networks with linear structures, such as Alexnet BID13 ) and vgg series BID22 ). networks with non-linear structures, such as Resnet series BID10 ), Densenet series BID11 ) and Inception net BID24 ). For each network in TAB4, an computation graph is built such that every vertex is a Float32 tensor, every edge is an operation, and the memory cost of a vertex is its tensor size (measured in MB). We compared Re-forwarding with Chen ) and the regular training approach. Note that only worked on linear computation graphs. To compare with ) on non-linear networks, we manually re-organized all the non-linear computation graphs into linear computation graphs with their splitting vertices, and fed them to (see TAB4 (MB)"). Our Re-forwarding approach directly works on arbitrary computation graphs. We have also included a customized network ("CustomNet"), on which even the manual version of Chen's approach is not applicable. Our approach directly works on all networks. The computation graph of this network is visualized in the appendix. All experiments were conducted in Pytorch. [BatchSize, 3, 224, 224]. We also measure the training time (time of 1 training iteration) for the regular approach and our approach. Each time is measured as the average of 20 iterations. Our approach has the same training time as Chen's approach and its manual version, see "Space Efficient Training Time" in TAB4. Table. 1 shows that Re-forwarding cuts down huge amount of memory from the regular approach at reasonable time overheads: 26% space off and 40% time overhead for Alexnet, around 40% space off and 40% time overhead for Vgg series. For Resnet series, the deeper network, the more memory was cut down. On the deepest Resnet152, 80% space off was achieved with only 39% time overhead. For Densenet series, more than 80% space off was achieved with around 40% time overhead. Notice that, only works on linear networks. Its on non-linear networks were manually synthesized. Re-forwarding directly works on non-linear networks and constantly outperformed and its "manual" version. This supports our claim that Re-forwarding is optimal. 
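For reference, the objective in Eqn. 1 is straightforward to evaluate on a linear graph once the stored vertices are fixed, and tiny instances can be brute-forced as a sanity check for the solvers above. The sketch below assumes the source and target tensors are always kept; the function names are ours, not the paper's.

```python
from itertools import combinations

def reforwarding_cost(costs, stored):
    """Eqn. 1 on a linear graph: sum of stored tensors plus the largest
    re-forward segment between consecutive stored vertices."""
    idx = [0] + sorted(stored) + [len(costs) - 1]      # assume endpoints are stored
    stored_cost = sum(costs[i] for i in idx)
    segment_cost = max(sum(costs[idx[k] + 1:idx[k + 1]]) for k in range(len(idx) - 1))
    return stored_cost + segment_cost

def brute_force(costs):
    """Try every subset of interior vertices; only feasible for tiny graphs."""
    interior = range(1, len(costs) - 1)
    best = None
    for r in range(len(costs) - 1):
        for subset in combinations(interior, r):
            c = reforwarding_cost(costs, subset)
            if best is None or c < best[0]:
                best = (c, subset)
    return best

costs = [4, 1, 3, 2, 5, 1, 2]      # per-vertex tensor sizes, e.g. in MB
print(brute_force(costs), "vs. storing everything:", sum(costs))
```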
Re-forwarding is a fundamental approach that explores trade-off between memory and computation power of GPUs. By saving tensors at a subset of layers during forward, and conducting extra local forwards for backward, Re-forwarding makes it possible to train very heavy DNNs with finite GPU memory. To our knowledge, our theoretical and algorithmic are the first top-down work that achieve an optimal memory solution for arbitrary computation graphs in DNNs. Re-forwarding can be further embedded and optimized with any low-level techniques such as distributed computing, GPU/CPU swapping, computation graph optimization and liveness analysis. Same on v q, v q must be v j or v t. As s ⊂ [s ij), ∀v 1 ∈ s, v 1 has no edge with v 2 ∈ [s ij). As s kj is close, ∀v 1 ∈ s, v 1 has no edge with v 2 ∈ s kj. ∀v 1 ∈ s, v 1 can only have edge with v 2 ∈ [s]. Thus the independence of s is guaranteed. Therefore, s is closed set, v k is the splitting vertex of s ij. DISPLAYFORM0 Same on v j, v j is the splitting vertex of s kt Lemma 4. If s ij has n splitting vertices {v 1, v 2, ..., v n}, then s ij = s i1 ∪ s 12 ∪... ∪ s nj ∪ {v 1, v 2, ..., v n} Proof. If n = 2, the splitting vertices are DISPLAYFORM1 According to Lemma 3, v 1 is splitting vertex of s i2 and v 2 is splitting vertex of s 1j. Therefore, DISPLAYFORM2 For n > 2, the lemma can be proved by repetitively using the in n = 2. Lemma 6. Any member of a maximal split can not be the subset of another closed set s s ij.Proof. Suppose the source vertex of s is v 1 and target vertex is v 2, a member s xy of the maximal split is inside s. Suppose a member s ab of the maximal split has its source vertex v a inside s and target vertex v b outside s. Then the boundary vertex (the vertex that has edges to the non-overlapping parts of both sets) must be v 2, otherwise the independence of s will be violated. Notice that v 2 is inside s ab and the independence of s ab needs to be guaranteed, for ∀v p ∈ s, v p / ∈ s ∩ s ab, v q ∈ s ∩ s ab, v p has no edge with v q. Therefore, v a is a splitting vertex of s. Similarly, if s ba has its target vertex v a inside s and source vertex v b outside s, the boundary vertex must be v 1 and v a is a splitting vertex of s. For the closed set s, from the discussion above, we know that there are at most 2 members of the maximal split that can overlap with s. Other members must be either completely inside s or completely outside s. Let's discuss the number of members that overlaps with s. If there are 0 member that overlaps with s, s is the union of a subset of members of the maximal split, which violates the definition of maximal split. If there is 1 member that overlaps with s, suppose the corresponding splitting vertex is v b, and the boundary vertex is actually v 2. Then s 1b is a closed set containing s xy and corresponds to the situation of 0 member overlapping. s 1b is the union of a subset of members of the maximal split, and violates the definition of maximal split. If there are 2 members that overlaps with s, suppose they generate two different splitting vertex v a and v b. Then s ab is a closed set containing s xy and corresponds to the situation of 0 member overlapping. s ab is the union of a subset of members of the maximal split, and violates the definition of maximal split. If they generate the same splitting vertex v b, from lemma 5, v b is also the endpoint vertex of at least 1 other member s ab which has to be inside s. Suppose the two overlapping members are s cb that contains v 1, and s bd that contains v 2. 
As the source vertex of s, v 1 has path to v b and v 1 has path to v a, which implies v b has path to v a. As the target vertex of s, v 2 has path from v b and v 2 has path from v a, which implies v b has path from v a. This conflicts with the fact that s is acyclic. Therefore, this case is not possible. Therefore, this lemma is proved. Lemma 7. If non-branched s ij has at least 1 vertex but has 0 splitting vertex, then its maximal split has length > 2 Proof. As s ij is not branched, the members of its maximal split cannot have the starting vertex as v i and the ending vertex as v j at the same time. If s ij has at least 1 vertex, and its maximal split has length 2, then its maximal split must be {[s ik], [s kj]}, and v k will be the splitting vertex of s ij, which violates that s ij has no splitting vertex. If s ij has at least 1 vertex without splitting vertex, it has at least 2 edges and cannot have a trivial length 1 maximal split. Therefore, its maximal split has length > 2 To prove this uniqueness, we simply discuss the division uniqueness of closed set type 1, 2 and 3. Proof. By the definition of this division and Lemma 4, the uniqueness of the division is equivalent to the uniqueness of the splitting vertex set of a closed set type 1. The splitting vertex set is obviously unique. Proof. If there exists another division, there must be a branch member s Proof. As the closed set in the division tree has at least 1 vertex, with Lemma 7, we know that the division, i.e. maximal split of a closed set type 3 s ij within the division tree will have length > 2. Denote this maximal split as {[s pq]}, we only need to prove this maximal split is unique. In all the cases, there cannot exist another different maximal split. Therefore, the maximal split is unique. Similar with the uniqueness, the completeness of division tree is equivalent to the completeness of the division of a closed set. To prove this completeness, we simply discuss the division completeness of closed set type 1, 2 and 3.An equivalent statement of the division completeness is: there doesn't exist a closed set whose head is in one member of the division and whose tail is in another member of the division. Proof. Suppose there exists a closed set s whose head v p is in one member s 1 and whose tail v q is in another member s 2.If v p is not an endpoint of s 1, then according to Lemma 3, v p is also a splitting vertex in s 1 and can break s 1 into smaller segments, which makes v p also the splitting vertex of the whole closed set. However, v p is not the splitting vertex of the whole closed set s ij. This also applies to v q. Therefore, the division of closed set type 1 is complete. Proof. Suppose there exists a closed set s whose head v p is in one member s 1 and whose tail v q is in another member s 2. Same with closed set type 2, the boundary vertex v has to be the endpoint vertex of s 1 or the independence of s 1 will be violated. According to Lemma 5, v is the endpoint vertex of at least 3 members, meaning that v will at least have 1 connection with another closed set s 3. To maintain the independence of s, s has to include s 3 as well. However, s 3 also has its endpoints. This will propagate until s becomes the whole closed set. Therefore, there cannot exist such a closed set s. The division of closed set type 3 is complete. DISPLAYFORM0. These steps will cost O(N 2). Then we traverse each (v i, v j) pair to form the edges of the accessibility graph, which also cost O(N 2). 
Solving the shortest path problem in accessibility graph will also cost O(N 2) as the accessibility graph has N vertices. Therefore, the overall time complexity of Algorithm 1 would be O(N 4).The space complexity would be O(N 2) for the table of l(v i, v j) and the accessibility graph itself. Suppose there are N vertices in the closed set s ij. In step 1, getting {v in} and {v out} will cost O(N) time for traversing the ancestors and descendents of v t. In our implementation, an array a of length N is used to represent {v in} and {v out}: a i = 1 indicates v i ∈ {v in}, a i = 2 indicates v i ∈ {v out} and a i = 0 indicates v i / ∈ {v in} ∪ {v out}. Then the union check and intersection check in step 2 can be done in O(N). The connection check in step 2 traverses the edges and costs O(N 2). Other steps are O. Therefore, the overall time complexity of Algorithm 2 would be O(N 2).The space complexity would be O(N) for the array to represent {v in} and {v out}. Suppose there are N vertices in the closed set s ij. The most time consuming part will be from step 5 to step 13. Other steps are O. In step 5 to step 13, every edge between two vertices in s ij is at most visited once and there are O(N 2) edges. Therefore, the overall time complexity of Algorithm 3 would be O(N 2).In our implementation, an array of length N is used to represent the vertex set s = {v k}. Therefore, the space complexity would be O(N). Suppose there are N vertices in the closed set s ij and there are O(N 2) vertex pairs. For each vertex pair, the connection check in step 2-4 will cost O(N 2), similar to the connection check in Algorithm 2. Thus step 1-4 will cost O(N 4). In our implementation, for each vertex in the closed set s ij, we select the largest formed closed set s kt that contains this vertex. The closed set number is then reduced to O(N) and step 5-6 can be done in O(N 3). Therefore, the overall time complexity of Algorithm 4 would be O(N 4)As O(N 2) closed sets can be formed in step 1-4 and each closed set is a smaller DAG with O(N) vertices and cost O(N 2) space, the space complexity would be O(N 4) for all these closed sets. B.5 ALGORITHM 5Step 1 is similar to step 1-4 in Algorithm 4 with s ij being the whole computation graph. Therefore, the overall time complexity for step 1 is O(N 4).In step 2, the complexity of building division tree is related to the complexity of getting the division of a closed set. For closed set type 1, Algorithm 2 is called for each vertex to get all splitting vertices. Thus getting the division of closed set type 1 cost O(N 3) time. For closed set type 2, Algorithm 3 is used to solve for its division and costs O(N 2) time. For type 3, Algorithm 4 is called to solve for its division. Notice that we have already stored all possible closed sets in step 1, step 1-4 in Algorithm 4 can be skipped and thus the time complexity of getting the division of closed set type 3 is reduced to O(N 3). Therefore, getting the division of an arbitrary closed set costs O(N 3) time. In depth i of the division tree, suppose there are k closed sets, and the number of vertices of jth closed sets is a j. To build depth i + 1 of the division tree, we need to get the division of all these closed sets, which will cost j O(a DISPLAYFORM0 As the depth of division tree is at most N, the overall time complexity of step 2 would be O(N 4).For step 3-10, if the computation graph is linear, the ACG solver will reduce to LCG solver and has complexity O(N 4). 
If the computation graph is non-linear, the length of {m} would be O(N 2) for there are O(N 2) vertex pair. For a max term m, from step 4-10, the actual time costing part will be step 6 which calls the LCG solver, and other steps would be O. Suppose the LCG solver is called k times, solving problems of a 1, a 2,..., a k vertices. The total complexity of this would be O(a Step 1 would cost O(N 4) space to store all the possible closed sets. Step 2 would cost O(N 2) space for the division tree. Step 3-10 would cost O(N 2) space for calling LCG solver. In , the overall time complexity of Algorithm 5 is O(N 4) and the overall space complexity of Algorithm 5 is O(N 4). C RUNTIME AND THEORETICAL ANALYSISThe number of vertices in the computation graph and the runtime of ACG Solver (Algorithm 5) for each network are listed in TAB10. All the runtimes were measured on a single core of CPU i7-8700.Notice that the runtime is measured on only 1 cpu core, it can be massively reduced by parallelization on multiple cpu cores. The runtime can also be further reduced through a better implementation as our implementation is a prototype. Although it might be concerning that the runtime is too much for some deep networks, it is still relatively small compared to training processes which might cost days or even weeks. More importantly, solving the optimal solution for a network is an one-time effort. The optimal solutions for all popular networks will be released online for people to use without taking the time to run ACG solver. To see how well the reality matches with the theory, we also compare the measured memory cut off and theoretical memory cut off (given by Algorithm 5) in TAB10. Observe that all the measured memory cut off are slightly lower than theoretical memory cut off. This is because, in implementation, we assume that the whole input tensors of each operation are always stored for backward. In reality, some operations only need to store small tensors for backward. For example, batchnormalization only needs a few statistics for backward and doesn't need the whole input tensor. We visualize the computation graph of Alexnet, vgg11, vgg13, vgg16, vgg19 and CustomNet and the solution of our approach (in green) and the solution of Chen's approach (in red). In the computation graphs, the cost of each vertex and the actual operation of each edge are also marked. The cost of each vertex is the size of this tensor during forward given the input as ( for inception v3). For example, in Alexnet, the input is and thus the source vertex has the cost 150528 = 1 × 3 × 224 × 224. After 2D convolution and relu, the tensor becomes and thus the second vertex has the cost 193600 = 1×64×55×55.
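A small helper reproduces these vertex costs, assuming AlexNet's standard first convolution (64 filters, kernel 11, stride 4, padding 2); costs are in elements, and the Float32 size in MB follows by multiplying with 4/2^20.

```python
def conv_out_hw(h, w, kernel, stride, padding):
    """Spatial size after a standard 2D convolution."""
    return ((h + 2 * padding - kernel) // stride + 1,
            (w + 2 * padding - kernel) // stride + 1)

def tensor_cost(shape):
    """Vertex cost: number of elements, and the Float32 size in MB."""
    elems = 1
    for d in shape:
        elems *= d
    return elems, elems * 4 / 2**20

print(tensor_cost((1, 3, 224, 224)))              # (150528, ~0.574 MB) -- source vertex
h, w = conv_out_hw(224, 224, kernel=11, stride=4, padding=2)
print(tensor_cost((1, 64, h, w)))                 # (193600, ~0.738 MB) -- after conv + ReLU
```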
[ 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BJMvBjC5YQ
This paper proposes fundamental theory and optimal algorithms for DNN training that reduce the training memory of popular DNNs by up to 80%.
Compression is a key step to deploy large neural networks on resource-constrained platforms. As a popular compression technique, quantization constrains the number of distinct weight values and thus reducing the number of bits required to represent and store each weight. In this paper, we study the representation power of quantized neural networks. First, we prove the universal approximability of quantized ReLU networks on a wide class of functions. Then we provide upper bounds on the number of weights and the memory size for a given approximation error bound and the bit-width of weights for function-independent and function-dependent structures. Our reveal that, to attain an approximation error bound of $\epsilon$, the number of weights needed by a quantized network is no more than $\mathcal{O}\left(\log^5(1/\epsilon)\right)$ times that of an unquantized network. This overhead is of much lower order than the lower bound of the number of weights needed for the error bound, supporting the empirical success of various quantization techniques. To the best of our knowledge, this is the first in-depth study on the complexity bounds of quantized neural networks. Various deep neural networks deliver state-of-the-art performance on many tasks such as object recognition and natural language processing using new learning strategies and architectures BID11;; ). Their prevalence has extended to embedded or mobile devices for edge intelligence, where security, reliability or latency constraints refrain the networks from running on servers or in clouds. However, large network sizes with the associated expensive computation and memory consumption make edge intelligence even more challenging BID2 ).In response, as will be more detailed in Section 2, substantial effort has been made to reduce the memory consumption of neural networks while minimizing the accuracy loss. The memory consumption of neural networks can be reduced by either directly reducing the number of weights or decreasing the number of bits (bit-width) needed to represent and store each weight, which can be employed on top of each other BID3. The number of weights can be reduced by pruning BID9, weight sparsifying , structured sparsity learning BID14 and low rank approximation BID5. The bit-width is reduced by quantization that maps data to a smaller set of distinct levels . Note that while quantization may stand for linear quantization only (; BID7 or nonlinear quantization only BID8 BID3 in different works, our discussion will cover both cases. However, as of today quantization is still only empirically shown to be robust and effective to compress various neural network architectures (; BID20 BID22 . Its theoretical foundation still remains mostly missing. Specifically, many important questions remain unanswered. For example:• Why even binarized networks, those most extremely quantized with bit-width down to one, still work well in some cases?• To what extent will quantization decrease the expressive power of a network? Alternatively, what is the overhead induced by weight quantization in order to maintain the same accuracy?In this paper, we provide some insights into these questions from a theoretical perspective. We focus on ReLU networks, which is among the most widely used in deep neural networks BID15 . We follow the idea from BID16 to prove the complexity bound by constructing a network, but with new and additional construction components essential for quantized networks. 
Specifically, given the number of distinct weight values λ and a target function f, we construct a network that can approximate f with an arbitrarily small error bound to prove the universal approximability. The memory size of this network then naturally serves as an upper bound for the minimal network size. The high-level idea of our approach is to replace basic units in an unquantized network with quantized sub-networks 1 that approximate these basic units. For example, we can approximate a connection with any weight in an unquantized network by a quantized sub-network that only uses a finite number of given weight values. Even though the approximation of a single unit can be made arbitrarily accurate in principle with unlimited resources (such as increased network depth), in practice, there exists some inevitable residual error at every approximation, all of which could propagate throughout the entire network. The challenge becomes, however, how to mathematically prove that we can still achieve the end-to-end arbitrary small error bound even if these unavoidable residual errors caused by quantization can be propagated throughout the entire network. This paper finds a solution to solve the above challenge. In doing so, we have to propose a number of new ideas to solve related challenges, including judiciously choosing the proper finite weight values, constructing the approximation sub-networks as efficient as possible (to have a tight upper bound), and striking a good balance among the complexities of different approximation steps. Based on the bounds derived, we compare them with the available on unquantized neural networks and discuss its implications. In particular, the main contributions of this paper include:• We prove that even the most extremely quantized ReLU networks using two distinct weight values are capable of representing a wide class of functions with arbitrary accuracy.• Given the number of distinct weights and the desired approximation error bound, we provide upper bounds on the number of weights and the memory size. We further show that our upper bounds have good tightness by comparing them with the lower bound of unquantized ReLU networks established in the literature.• We show that, to attain the same approximation error bound, the number of weights needed by a quantized network is no more than O log 5 (1/) times that of an unquantized network. This overhead is of much lower order compared with even the lower bound of the number of weights needed for the error bound. This partially explains why many state-ofthe-art quantization schemes work well in practice.• We demonstrate how a theoretical complexity bound can be used to estimate an optimal bit-width, which in turn enables the best cost-effectiveness for a given task. The remainder of the paper is organized as follows. Section 2 reviews related works. Section 3 lays down the models and assumptions of our analysis. We prove the universal approximability and the upper bounds with function-independent structure in Section 4 and extend it to function-dependent structure in Section 5. We analyze the bound-based optimal bit-width in Section 6. Finally, Section 7 discusses the and gets back to the questions raised above. Quantized Neural Networks: There are rich literatures on how to obtain quantized networks, either by linear quantization or nonlinear quantization BID19; ). 
Linear quantization does mapping with a same distance between contiguous quantization levels and is usually implemented by storing weights as fixed-point numbers with reduced bit-width (; BID7 . Nonlinear quantization maps the data to quantization levels that are not uniformly distributed and can be either preselected or learned from training. Then the weights are stored using lossless binary coding (the index to a lookup table) instead of the actual values BID8 BID3. It is reported that a pruned AlexNet can be quantized to eight bits and five bits in convolutional layers and fully connected layers, respectively, without any loss of accuracy. Similar are also observed in LENET-300-100, LENET-5, and VGG-16 (a). One may argue that some of these benchmark networks are known to have redundancy. However, recent works show that quantization works well even on networks that are designed to be extremely small and compact. SqueezeNet, which is a state-of-the-art compact network, can be quantized to 8-bit while preserving the original accuracy BID7 ). There are some representative works that can achieve little accuracy loss on ImageNet classification even using binary or ternary weights BID4; BID14 BID21. More aggressively, some works also reduce the precision of activations, e.g. (; ; BID6 . Although the classification accuracy loss can be minimized, the universal approximation property is apparently lost, as with limited output precision the network cannot achieve arbitrary accuracy. Accordingly, we do not include them in the discussion of this paper. The limit of quantization is still unknown while the state-of-the-art keeps getting updated. For example, VGG-16 is quantized to 3-bit while maintaining the original accuracy . Motivated by the great empirical success, the training of quantized neural networks has been analyzed theoretically, but not the network capacity (; BID3 .Universal Approximability and Complexity Bounds: The universal approximability of ReLU networks is proved in and revisited in ( Throughout this paper, we define ReLU networks as feedforward neural networks with the ReLU activation function σ(x) = max(0, x). The ReLU network considered includes multiple input units, a number of hidden units, and one output unit. Without loss of generality, each unit can only connect to units in the next layer. Our on ReLU networks can be extended to any other networks that use piecewise linear activation functions with finite breakpoints such as leaky ReLU and ReLU-6 immediately, as one can replace a ReLU network by an equivalent one using these activation functions while only increasing the number of units and weights by constant factors BID16.We denote the finite number of distinct weight values as λ (λ ∈ Z + and λ ≥ 2), for both linear quantization and nonlinear quantization. For linear quantization, without loss of generality, we assume the finite number of distinct weight values are given as {−1, λ} are uniformly spaced (hence called "linear") in and −1 is used to obtain the negative weight values. For nonlinear quantization, we assume the finite number of distinct weight values are not constrained to any specific values, i.e., they can take any values as needed. To store each weight, we only need log(λ) 2 bits to encode the index, i.e. the bit-width is log(λ). 
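As a concrete illustration of this storage model (a sketch under our own naming, not code from the paper), nonlinear quantization keeps a λ-entry codebook of full-precision values and stores one ⌈log2 λ⌉-bit index per weight:

```python
import math
import numpy as np

def quantize_nonlinear(weights, lam):
    """Map weights to lam levels (here chosen as quantiles, purely for
    illustration) and return per-weight indices plus the codebook."""
    codebook = np.quantile(weights, np.linspace(0, 1, lam))
    indices = np.abs(weights[:, None] - codebook[None, :]).argmin(axis=1)
    return indices, codebook

def storage_bits(num_weights, lam, full_precision_bits=32):
    """Index storage dominates; the codebook itself is a lower-order term."""
    return num_weights * math.ceil(math.log2(lam)) + lam * full_precision_bits

w = np.random.randn(10_000).astype(np.float32)
idx, cb = quantize_nonlinear(w, lam=16)
print(storage_bits(w.size, 16), "bits vs.", w.size * 32, "bits unquantized")
```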
The overhead to store sparse structures can be ignored because it varies depending on the implementation and can be easily reduced to the same order as the weight storage using techniques such as compressed sparse row (CSR) for nonlinear quantization. The number of bits needed to store the codebook can also be ignored because it has lower order of complexity. We consider any function f in the Sobolev space: DISPLAYFORM0 The space W n,∞ consists of all locally integrable function f: DISPLAYFORM1, where |n| ≤ n and Ω is an open set in R d. We denote this function space as F d,n in this paper. Note that we only assume weak derivatives up to order n exist where n can be as small as 1 where the function is non-differentiable. We also only assume the Lipschitz constant to be no greater than 1 for the simplicity of the derivation. When the Lipschitz constant is bigger than 1, as long as it is bounded, the whole flow of the proof remains the same though the bound expression will scale accordingly. When constructing the network to approximate any target function f, we consider two scenarios for deriving the bounds. The first scenario is called function-dependent structure, where the constructed network topology and their associated weights are all affected by the choice of the target function. In contrast, the second scenario is called function-independent structure, where the constructed network topology is independent of the choice of the target function in f ∈ F d,n with a given. The principle behind these design choices (the network topology constructions and the choice of weights) is to achieve a tight upper bound as much as possible. One might consider that we can transform an unquantized network within the error bound to a quantized one in a straightforward way by approximating every continuous-value weight with a combination of discrete weights with arbitrary accuracy. However, the complexity of such approximation (number of discrete weights needed) depends on the distribution of those continuous-value weights (e.g., their min and max), which may vary depending on the training data and network structure and a closed-form expression for the upper bounds is not possible. As such, a more elegant approach is needed. Below we will establish a constructive approach which allows us to bound the approximation analytically. We start our analysis with function-independent structure, where the network topology is fixed for any f ∈ F d,n and a given. We first present the approximation of some basic functions by subnetworks in Section 4.1. We then present the sub-network that approximates any weight in Section 4.2, and finally the approximation of general functions and our main are in Section 4.3. Proposition 1. Denote the design parameter that determines the approximation error bound as r. The proof and the details of the sub-network constructed are included in Appendix A.1. Once the approximation to squaring function is obtained, we get Proposition 2 by the fact that 2xy = (x + y) 2 − x 2 − y 2. Proposition 2. Denote the design parameter that determines the approximation error bound as r. Given x ∈ [−1, 1], y ∈ [−1, 1], and only two weight values 1 2 and − 1 2, there is a ReLU sub-network with two input units that implements a function ×: R 2 → R, such that (i) if x = 0 or y = 0, then × (x, y) = 0; (ii) for any x, y, the error × = | × (x, y) − xy| ≤ 6 · 2 −2(r+1); (iii) the depth is O (r); (iv) the width is a constant; (v) the number of weights is O (r).Proof. 
Build three sub-networks f r s as described in Proposition 1 and let DISPLAYFORM0 Then the statement (i) is followed by property (i) of Proposition 1. Using the error bound in Proposition 1 and Equation, we get the error bound of ×: DISPLAYFORM1 Since a sub-network B abs that computes σ(x) + σ(−x) can be constructed to get the absolute value of x trivially, we can construct × (x, y) as a linear combination of three parallel f r s and feed them with Proposition 3. Denote the design parameter that determines the approximation error bound as t. A connection with any weight w ∈ [−1, 1] can be approximated by a ReLU sub-network that has only λ ≥ 2 distinct weights, such that (i) the sub-network is equivalent to a connection with weight w while the approximation error is bounded by 2 −t i.e., |w − w| < 2 −t; (ii) the depth is O λt DISPLAYFORM0 Proof. Consider that we need a weight w to feed the input x to a unit in the next layer as wx. With a limited number of distinct weight values, we can construct the weight we need by cascade and combination. For clarity, we first consider w ≥ 0 and x ≥ 0, and relax these assumptions later. The connections with w = 0 can be seen as an empty sub-network while w = 1 can be easily implemented by 4 units with weight 1 2. Now we show how to represent all integral multiples of 2 −t from 2 −t to 1 − 2 −t, which will lead to the statement (i) by choosing the nearest one from w as w. Without loss of generality, we assume t 1 λ−1 is an integer. We use λ weights that include − 1 2 and W: DISPLAYFORM1 We first construct all w from W c which is defined as DISPLAYFORM2 Similar to a numeral system with radix equal to t 1 λ−1, any w i ∈ W c can be obtained by concatenating weights from W while every weights in W is used no greater than t 1 λ−1 − 1 times. After that, all integral multiples of 2 −t from 2 −t to 1−2 −t can be represented by a binary expansion on W c. Note that connections in the last layer for binary expansion use weight 1 2, thus additional 2 −1 is multiplied to scale the resolution from 2 −(t−1) to 2 −t. Since for any weight in W c we need to concatenate no more than λ t 1 λ−1 − 1 weights in a straight line, the sub-network has no greater than λ t 1 λ−1 − 1 + 1 layers, and no greater than 4tλ t 1 λ−1 − 1 + 8t + 4 weights. We now relax the assumption w ≥ 0. When w < 0, the sub-network can be constructed as w = |w|, while we use − 1 2 instead of 1 2 in the last layer. To relax the assumption x ≥ 0, we can make a duplication of the sub-network. Let all the weights in the first layer of the sub-network be 1 2 for one and − 1 2 for the other. Here we are utilizing the gate property of ReLU. In this way, one sub-network is activated only when x > 0 and the other is activated only when x < 0. The sign of the output can be adjusted by flipping the sign of weights in the last layer. Note that the configuration of the sub-network is solely determined by w and works for any input x. The efficiency of the weight approximation is critical to the overall complexity. Compared with the weight selection as {2 −1, 2 DISPLAYFORM3 λ−1}, our approximation reduces the number of weights by a factor of t λ−2 λ−1. With the help of Proposition 2 and Proposition 3, we are able to prove the upper bound for general functions. Theorem 1. 
For any f ∈ F d,n, given λ distinct weights, there is a ReLU network with fixed structure that can approximate f with any error ∈, such that (i) the depth is DISPLAYFORM0 the number of bits needed to store the network is O λ log (λ) log DISPLAYFORM1 The complete proof and the network constructed can be found in Appendix A.2. We first approximate f by f 2 using the Taylor polynomial of order n − 1 and prove the approximation error bound. Note that even when f is non-differentiable (only first order weak derivative exists), the Taylor polynomial of order 0 at x = m N can still be used, which takes the form of P m = f (m N). Then we approximate f 2 by a ReLU network that is denoted as f with bounded error. After that, we present the quantized ReLU network that implements f and the complexity. The discussion above focuses on nonlinear quantization which is a more general case compared to linear quantization. For linear quantization, which strictly determines the available weight values once λ is given, we can use the same proof for nonlinear quantization except for a different subnetwork for weight approximation with width t and depth t log λ +1. Here we give the theorem and the proof is included in Appendix A.3. Theorem 2. For any f ∈ F d,n, given weight maximum precision 1 λ, there is a ReLU network with fixed structure that can approximate f with any error ∈, such that (i) the depth is O (log (1/)); (ii) the number of weights is O log (1/) + DISPLAYFORM2; (iii) the number of bits needed to store the network is O log(λ) log (1/) + log DISPLAYFORM3 The network complexity can be reduced if the network topology can be set according to a specific target function, i.e. function-dependent structure. In this section, we provide an upper bound for function-dependent structure when d = 1 and n = 1, which is asymptotically better than that of a fixed structure. Specifically, we first define an approximation to f (x) as f (x) that has special properties to match the peculiarity of quantized networks. Then we use piecewise linear interpolation and "cached" functions BID16 to approximate f (x) by a ReLU network. While simply using piecewise linear interpolation at the scale of can satisfy the error bound with O (1/) weights, the complexity can be reduced by first doing interpolation at a coarser scale and then fill the details in the intervals to make the error go down to. By assigning a "cached" function to every interval depending on specific function and proper scaling, the number of weights is reduced to O log −1 (1/) 1/ when there is no constraint on weight values BID16.The key difficulty in applying this approach to quantized ReLU networks is that the required linear interpolation at i T exactly where i = 1, 2, · · ·, T is not feasible because of the constraint on weight selection. To this end, we transform f (x) to f (x) such that the approximation error is bounded; the Lipschitz constant is preserved; f i T are reachable for the network under the constraints of weight selection without increasing the requirement on weight precision. Then we can apply the interpolation and cached function method on f (x) and finally approximate f (x) with a quantized ReLU network. Formally, we get the following proposition and the proof can be found in Appendix A.4.Proposition 4. 
For any f ∈ F 1,1, t ∈ Z +, and T ∈ Z +, there exists a function f (x) such that DISPLAYFORM0 With the help of Proposition 4 and the weight construction method described in Section 4.2, we are able to apply the interpolation and cached function approach. Denoting the output of the network as f (x), we have |f (x)−f (x)| = |f (x)− f (x)|+| f (x)−f (x)| ≤ by choosing appropriate hyperparameters which are detailed in Appendix A.5 and the network complexity is obtained accordingly. Theorem 3. For any f ∈ F 1,1, given λ distinct weights, there is a ReLU network with function-dependent structure that can approximate f with any error ∈, such that (i) the depth is O λ (log log (1/)) 1 λ−1 + log (1/); (ii) the number of weights is O λ (log log (1/)) 1 λ−1 +1 + (1/) (iii) the number of bits needed to store the network is O log λ λ (log log (1/)) DISPLAYFORM0 Using the different weight construction approach as in the case of function-independent structure, we have the for linear quantization: Theorem 4. For any f ∈ F 1,1, given weight maximum precision 1 λ, there is a ReLU network with function-dependent structure that can approximate f with any error ∈, such that (i) the depth is O (log (1/)); (ii) the number of weights is O (1/); (iii) the number of bits needed to store the network is O (log(λ)/ ). In this section, we first introduce the optimal bit-width problem and then show how a theoretical bound could potentially be used to estimate the optimal bit-width of a neural network. Because of the natural need and desire of comparison with competitive approaches, most quantization techniques are evaluated on some popular reference networks, without modification of the network topology. On the one hand, the advancement of lossless quantization almost stalls at a bit-width between two and six BID8 BID3; BID0; BID6. A specific bit-width depends on the compactness of the reference network and the difficulty of the task. On the other hand, the design space, especially the different combinations of topology and bit-width, is largely underexplored because of the complexity, ing in sub-optimal . A recent work by empirically validates the benefit of exploring flexible network topology during quantization. That work adds a simple variable of network expanding ratio, and shows that a bit-width of four achieves the best cost-accuracy trade-off among limited options in {1, 2, 4, 8, 16, 32}. Some recent effort on using reinforcement learning to optimize the network hyper-parameters BID12 could potentially be used to address this issue. But the current design space is still limited to a single variable per layer (such as the pruning ratio based on a reference network). How to estimate an optimal bit-width for a target task without training could be an interesting research direction in the future. The memory bound expression as derived in this paper helps us to determine whether there is an optimal λ that would lead to the lowest bound and most compact network (which can be translated to computation cost in a fully connected structure) for a given target function. For example, by dropping the lower-order term and ignoring the rounding operator, our memory bound can be simplified as DISPLAYFORM0 where θ 1 is a constant determined by, n, and d. We can find an optimal λ that minimizes M (λ): DISPLAYFORM1 As is detailed in Appendix B, we prove that there exists one and only one local minimum (hence global minimum) in the range of [2, ∞) whenever < 1 2. 
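As a purely illustrative numerical counterpart (not the closed-form analysis of Appendix B), one can scan integer λ against a surrogate of the bound. The sketch below assumes a surrogate of the form M(λ) ∝ λ · log2(λ) · K^(1/(λ−1)) with K = log(3n·2^d/ε), a reading consistent with the dependence described here; multiplicative constants do not move the minimizer, and the exact expression is the one in the appendix.

```python
import math

def surrogate_bound(lam, d, n, eps):
    """Assumed simplified surrogate of the memory bound M(lambda)."""
    K = math.log(3 * n / eps) + d * math.log(2)     # K = log(3n * 2^d / eps)
    return lam * math.log2(lam) * K ** (1.0 / (lam - 1))

def optimal_bit_width(d, n=1, eps=0.01, lam_max=2**16):
    lam_opt = min(range(2, lam_max + 1),
                  key=lambda lam: surrogate_bound(lam, d, n, eps))
    return math.log2(lam_opt)

for d in (28 * 28, 3 * 32 * 32, 3 * 224 * 224):     # MNIST, CIFAR, ImageNet input sizes
    print(d, optimal_bit_width(d))
```

Under this surrogate the minimizer stays between roughly one and four bits across these input sizes, in line with the trend discussed above.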
We also show that λ opt is determined by log 3n2 d /, which can be easily dominated by d. Based on such , we quantitatively evaluate the derivative of M (λ), and based on which the optimal bit-width log(λ opt) under various settings in FIG3 and FIG3, respectively. In FIG3, we also mark the input dimension of a few image data sets. It is apparent to see that the optimal bit width derived from M (λ) is dominated by d and lies between one and four for a wide range of input size. This observation is consistent with most existing empirical research , hence showing the potential power of our theoretical bound derivation. Since the bounds are derived for fully connected networks and depend on the construction approach, the interesting proximity between log(λ opt) and the empirical cannot be viewed as a strict theoretical explanation. Regardless, we show that the complexity bound may be a viable approach DISPLAYFORM2 is a positive monotonically increasing function and thus does not affect the trends too much. Note that λ is the number of distinct weight values and thus log(λ) is the corresponding bit-width. It can be seen that and n only affect log(λ opt) when d is small (< 10 2). We also mark the input dimension d of various image data set and their corresponding log(λ opt). It shows that the optimal bit-width increases very slowly with d.to understand the optimal bit-width problem, thus potentially accelerating the hyper-parameter optimization of deep neural networks. We defer such a thorough investigation of the optimal bit-width or optimal hybrid bit-width configuration across the network to our future work. In this section, we further discuss the bound of nonlinear quantization with a function-independent structure as the generality of nonlinear quantization. The availability of unquantized functionindependent structures in literature also makes it an excellent reference for comparison. Comparison with the Upper Bound: The quality of an upper bound lies on its tightness. Compared with the most recent work on unquantized ReLU networks BID16, where the upper bound on the number of weights to attain an approximation error is given by O log(1/) (1/) d n, our for a quantized ReLU network is given by O λ log DISPLAYFORM0, which translates to an increase by a factor of λ log 1 λ−1 (1/). Loosely speaking, this term reflects the loss of expressive power because of weight quantization, which decreases quickly as λ increases. We also compare our bound with the lower bound of the number of weights needed to attain an error bound of to have a better understanding on the tightness of the bound. We use the lower bound for unquantized ReLU networks from BID16, as it is also a natural lower bound for quantized ReLU networks. Under the same growth rate of depth, the lower bound is given by Ω(log −3 (1/) (1/) d/n ), while our upper bound is, within a polylog factor when λ is a constant, O(λ log DISPLAYFORM0 The comparison validates the good tightness of our upper bound. The Upper Bound of Overhead: More importantly, the above comparison yields an upper bound on the possible overhead induced by quantization. By comparing the expressions of two bounds while treating λ as a constant, we can show that, to attain the same approximation error bound, the number of weights needed by a quantized ReLU network is no more than O(log 5 (1/)) times that needed by an unquantized ReLU network. Note that this factor is of much lower order than the lower bound Ω(log −3 (1/) (1/) d/n ). 
This little overhead introduced by weight quantization explains in part the empirical success on network compression and acceleration by quantization and also answers in part the questions as raised in Section 1. Given the significant benefits of quantization in term of memory and computation efficiency, we anticipate that the use of quantization networks will continue to grow, especially on resource-limited platforms. Future Work: There remain many other avenues for future investigation. For example, although we derived the first upper bound of quantized neural networks, the lower bound is still missing. If a tight lower bound of the network size is established, it could be combined with the upper bound to give a much better estimation of required resources and the optimal bit-width. We believe the trends associated with the bounds can also be useful and deserve some further investigation. For example, the trend may help hardware designers in their early stage of design exploration without the need of lengthy training. While we assume a uniform bit-width across all layers, another area of research is to allow different bit-widths in different layers, which could achieve better efficiency and potentially provide theoretical justifications on the emerging trend of hybrid quantization BID17 DISPLAYFORM1 DISPLAYFORM2 where DISPLAYFORM3 is the i-th iterate of g(x). Since g(x) can be implemented by a ReLU sub-network DISPLAYFORM4 •r (x) can be obtained by concatenating such implementation of g(x) for r times. Now, to implement f r s (x) based on g•r (x), all we need are weights {2 −2, 2 −4, · · ·, 2 −2(r−1), 2 −2r }, which can be easily constructed with additional 2r layers and the weight 1 2. Note that a straightforward implementation will have to scale g •i (x) separately (multiply by different numbers of 1 2) before subtracting them from x because each g•i (x) have a different coefficient. Then the width of the network will be Θ(r). Here we use a "pre-scale" method to reduce the network width from Θ(r) to a constant. The network constructed is shown in FIG6. The one-layer sub-network that implements g(x) and the one-layer sub-network that scales the input by 4 are denoted as B g and B m respectively. Some units are copied to compensate the scaling caused by •i (x) are scaled by 2 2(r−i) respectively. As a , we obtain 2 2r x − r i=1 2 2(r−i) g •i (x) after the last B m. Then it is scaled by 2 −2r in the later 2r layers to get f r s (x). In this way, we make all g •i (x) sharing the same scaling link and a constant width can be achieved. A.2 THE PROOF OF THEOREM 1 Theorem 1. For any f ∈ F d,n, given λ distinct weights, there is a ReLU network with fixed structure that can approximate f with any error ∈, such that (i) the depth is DISPLAYFORM5 the number of bits needed to store the network is O λ log (λ) log DISPLAYFORM6 Proof. The proof is composed of four steps. We first approximate f by f 2 using the Taylor polynomial of order n − 1 and prove the approximation error bound. Note that even when f is nondifferentiable (only first order weak derivative exist), the Taylor polynomial of order 0 at x = m N can still be used, which takes the form of P m = f (m N). Then we approximate f 2 by a ReLU network that is denoted as f with bounded error. After that, we present the quantized ReLU network that implements the network f and the complexity of the network. 
We use a partition of unity on DISPLAYFORM7, and h(x) is defined as follows: DISPLAYFORM8 where N is a constant and DISPLAYFORM9 Note that supp ψ m ⊂ {x : DISPLAYFORM10 For all m, we have the order n − 1 Taylor polynomial for the function f at x = m N as DISPLAYFORM11 To get a more realizable approximation for quantized networks, we define P m (x) = n−|n|. Then we get an approximation to f using P m and ψ m as f 2 m∈{0,···,N} d ψ m P m. Then the approximation error of f 2 is bounded by Equation. DISPLAYFORM12 The second step follows ψ m (x) = 0 when x / ∈ suppψ m. In the third step we turn the sum to multiplication, because for any x there are up to 2 d terms ψ m (x) that are not equal to zero. The fourth step uses a Lagrange's form of the Taylor remainder. The fifth step follows different round precision of β m,n in different order and the fact that the number of terms with order i is not greater than d i.We rewrite f 2 as DISPLAYFORM13 where DISPLAYFORM14 Note that β m,n is a constant and thus f 2 is a linear combination of at most d n (N + 1) d terms of f m,n (x). Note that when d = 1, the number of terms should be n(N + 1) d instead; but for simplicity of presentation we loosely use the same expression as they are on the same order. We define an approximation to f m,n (x) as f m,n (x). The only difference between f m,n (x) and f m,n (x) is that all multiplication operations are approximated by × as discussed in Proposition 2. Consider that if we construct our function × with | × (x, y) − xy| < × = 2 −2(r+1), then DISPLAYFORM15 Applying Equation FORMULA0 to |f m,n (x) − f m,n (x)| repeatedly, we bound it to Equation FORMULA0. DISPLAYFORM16 Finally, we define our approximation to f (x) as f (x): DISPLAYFORM17 Using Equation FORMULA0, we get the error bound of the approximation to f 2 (x) as in Equation FORMULA0. DISPLAYFORM18 The second line follows again the support property and statement (i) of Proposition 2. The third line uses the bound |β m,n | ≤ 1. The fourth line is obtained by inserting Equation.Then the final approximation error bound is as follows: DISPLAYFORM19 Using statement (ii) of Proposition 2 and choosing r as r = log(6N n (d+n−1)) 2 − 1, the approximation error turns to DISPLAYFORM20 Figure 3: A qunatized ReLU network that implements f (x). The connections of all B fmn are the same. Every connection from B N to other blocks has no greater than two weights. DISPLAYFORM21 Therefore, for any f ∈ F d,n and ∈, there is a ReLU network f that approximate f with error bound if we choose DISPLAYFORM22 We now present the construction of the network for f (x). If every f m,n (x) can be computed by a sub-network, then f (x) is simply a weighted sum of all outputs of f m,n (x). By Proposition 3, we can implement the needed weights β m,n by choosing t = log. As discussed in Proposition 3, we build a weight construction network B w in the way that all integral multiples of the minimal precision can be obtained. Therefore, all mi N can be obtained in the same way as β m,n, except that we need to concatenate two weight construction sub-networks. Now we analyze the complexity of the network. The implementation of f (x) is shown in Figure 3. The function and size of blocks are listed in TAB2. Then we are able to obtain the complexity of the network. While we can write the complexity of the network in an explicit expression, here we use the O notation for clarity. Let N d, N w, N b be the depth, the number of weights, and the number of bits required respectively. 
The weight construction blocks B w have the highest order of number of weights and we have N w = O λt DISPLAYFORM23 Inserting t = log A.3 THE PROOF OF THEOREM 2Theorem 2. For any f ∈ F d,n, given weight maximum precision 1 λ, there is a ReLU network with fixed structure that can approximate f with any error ∈, such that (i) the depth is O (log (1/)); (ii) the number of weights is O log (1/) + DISPLAYFORM24; (iii) the number of bits needed to store the network is O log(λ) log (1/) + log 2 (1/) (1/) DISPLAYFORM25 With λ distinct values, a linearly quantized network has a minimal resolution of 1 λ. The proof for the approximability of linear quantization can be done in the same way as Theorem 1 except for a different sub-network for weight approximation. We still construct W c in Proposition 3 first and any weight value from W c can be obtained by multiply at most t log λ weights. Thus the width and depth of the weight approximation network will be t and t log λ + 1 respectively. Updating the B w in TAB2, we obtain the complexity accordingly. DISPLAYFORM26 This contradicts the fact that f (T by Equation. Before the intersection, if f < f +, DISPLAYFORM27, apply the same logic and we obtain 0 ≤ f DISPLAYFORM28 T. This implies statement (iii) and concludes the proof. Theorem 3. For any f ∈ F 1,1, given λ distinct weights, there is a ReLU network with function-dependent structure that can approximate f with any error ∈, such that (i) the depth is O λ (log log (1/)) 1 λ−1 + log (1/); (ii) the number of weights is O λ (log log (1/)) 1 λ−1 +1 + (1/) (iii) the number of bits needed to store the network is O log λ λ (log log (1/)) 1 λ−1 +1 + (1/).Proof. We first transform f to f with Proposition 4. Then we apply the interpolation and cached function method from while using the weight construction method described in Proposition 3. Denoting the output of the network as f (x), we have |f (The approximation network is shown in FIG15 . The sizes of blocks are given in TAB3 where f T is the uniform linear interpolation function of f with T − 1 breakpoints, f * is the sum of the selected cached functions, Φ(x) is a filtering function. The inputs connections to B f * 1 and the connections inside B m have higher order to the number of weights than others. Then the complexity can be obtained accordingly. Theorem 4. For any f ∈ F 1,1, given weight maximum precision 1 λ, there is a ReLU network with function-dependent structure that can approximate f with any error ∈, such that (i) the
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJe9rh0cFX
This paper proves the universal approximability of quantized ReLU neural networks and puts forward the complexity bound given arbitrary error.
Reinforcement learning (RL) with value-based methods (e.g., Q-learning) has shown success in a variety of domains such as games and recommender systems (RSs). When the action space is finite, these algorithms implicitly finds a policy by learning the optimal value function, which are often very efficient. However, one major challenge of extending Q-learning to tackle continuous-action RL problems is that obtaining optimal Bellman backup requires solving a continuous action-maximization (max-Q) problem. While it is common to restrict the parameterization of the Q-function to be concave in actions to simplify the max-Q problem, such a restriction might lead to performance degradation. Alternatively, when the Q-function is parameterized with a generic feed-forward neural network (NN), the max-Q problem can be NP-hard. In this work, we propose the CAQL method which minimizes the Bellman residual using Q-learning with one of several plug-and-play action optimizers. In particular, leveraging the strides of optimization theories in deep NN, we show that max-Q problem can be solved optimally with mixed-integer programming (MIP)---when the Q-function has sufficient representation power, this MIP-based optimization induces better policies and is more robust than counterparts, e.g., CEM or GA, that approximate the max-Q solution. To speed up training of CAQL, we develop three techniques, namely (i) dynamic tolerance, (ii) dual filtering, and (iii) clustering. To speed up inference of CAQL, we introduce the action function that concurrently learns the optimal policy. To demonstrate the efficiency of CAQL we compare it with state-of-the-art RL algorithms on benchmark continuous control problems that have different degrees of action constraints and show that CAQL significantly outperforms policy-based methods in heavily constrained environments. Reinforcement learning (RL) has shown success in a variety of domains such as games and recommender systems (RSs) . When the action space is finite, valuebased algorithms such as Q-learning , which implicitly finds a policy by learning the optimal value function, are often very efficient because action optimization can be done by exhaustive enumeration. By contrast, in problems with a continuous action spaces (e.g., robotics ), policy-based algorithms, such as policy gradient (PG) or cross-entropy policy search (CEPS) , which directly learn a return-maximizing policy, have proven more practical. Recently, methods such as ensemble critic and entropy regularization have been developed to improve the performance of policy-based RL algorithms. Policy-based approaches require a reasonable choice of policy parameterization. In some continuous control problems, Gaussian distributions over actions conditioned on some state representation is used. However, in applications such as RSs, where actions often take the form of high-dimensional item-feature vectors, policies cannot typically be modeled by common action distributions. Furthermore, the admissible action set in RL is constrained in practice, for example, when actions must lie within a specific range for safety . In RSs, the admissible actions are often random functions of the state . In such cases, it is non-trivial to define policy parameterizations that handle such factors. On the other hand, value-based algorithms are wellsuited to these settings, providing potential advantage over policy methods. 
Moreover, at least with linear function approximation , under reasonable assumptions, Q-learning converges to optimality, while such optimality guarantees for non-convex policy-based methods are generally limited . Empirical also suggest that value-based methods are more data-efficient and less sensitive to hyper-parameters . Of course, with large action spaces, exhaustive action enumeration in value-based algorithms can be expensive--one solution is to represent actions with continuous features . The main challenge in applying value-based algorithms to continuous-action domains is selecting optimal actions (both at training and inference time). Previous work in this direction falls into three broad categories. The first solves the inner maximization of the (optimal) Bellman residual loss using global nonlinear optimizers, such as the cross-entropy method (CEM) for QT-Opt , gradient ascent (GA) for actor-expert , and action discretization (; ;). However, these approaches do not guarantee optimality. The second approach restricts the Q-function parameterization so that the optimization problem is tractable. For instance, wire-fitting approximates Q-values piecewise-linearly over a discrete set of points, chosen to ensure the maximum action is one of the extreme points. The normalized advantage function (NAF) constructs the state-action advantage function to be quadratic, hence analytically solvable. Parameterizing the Q-function with an input-convex neural network ensures it is concave. These restricted functional forms, however, may degrade performance if the domain does not conform to the imposed structure. The third category replaces optimal Q-values with a "soft" counterpart : an entropy regularizer ensures that both the optimal Q-function and policy have closed-form solutions. However, the sub-optimality gap of this soft policy scales with the interval and dimensionality of the action space . Motivated by the shortcomings of prior approaches, we propose Continuous Action Q-learning (CAQL), a Q-learning framework for continuous actions in which the Q-function is modeled by a generic feed-forward neural network. 1 Our contribution is three-fold. First, we develop the CAQL framework, which minimizes the Bellman residual in Q-learning using one of several "plug-andplay" action optimizers. We show that "max-Q" optimization, when the Q-function is approximated by a deep ReLU network, can be formulated as a mixed-integer program (MIP) that solves max-Q optimally. When the Q-function has sufficient representation power, MIP-based optimization induces better policies and is more robust than methods (e.g., CEM, GA) that approximate the max-Q solution. Second, to improve CAQL's practicality for larger-scale applications, we develop three speed-up techniques for computing max-Q values: (i) dynamic tolerance; (ii) dual filtering; and (iii) clustering. Third, we compare CAQL with several state-of-the-art RL algorithms on several benchmark problems with varying degrees of action constraints. Value-based CAQL is generally competitive, and outperforms policy-based methods in heavily constrained environments, sometimes significantly. We also study the effects of our speed-ups through ablation analysis. We consider an infinite-horizon, discounted Markov decision process with states X, (continuous) action space A, reward function R, transition kernel P, initial state distribution β and discount factor γ ∈, all having the usual meaning. 
A (stationary, Markovian) policy π specifies a distribution π(·|x) over actions to be taken at state x. Let ∆ be the set of such policies. The expected cumulative return of π ∈ ∆ is J(π):= E[∞ t=0 γ t r t | P, R, x 0 ∼ β, π]. An optimal policy π * satisfies π * ∈ arg max π∈∆ J(π). The Bellman operator F [Q](x, a) = R(x, a) + γ x ∈X P (x |x, a) max a ∈A Q(x, a) over state-action value function Q has unique fixed point Q * (x, a) , which is the optimal Q-function Q * (x, a) = E [∞ t=0 γ t R(x t, a t) | x 0 = x, a 0 = a, π * ]. An optimal (deterministic) policy π * can be extracted from Q *: π * (a|x) = 1{a = a * (x)}, where a * (x) ∈ arg max a Q * (x, a). For large or continuous state/action spaces, the optimal Q-function can be approximated, e.g., using a deep neural network (DNN) as in DQN . In DQN, the value function Q θ is updated using the value label r + γ max a Q θ target (x, a), where Q θ target is a target Q-function. Instead of training these weights jointly, θ target is updated in a separate iterative fashion using the previous θ for a fixed number of training steps, or by averaging θ target ← τ θ + (1 − τ)θ target for some small momentum weight τ ∈ . DQN is off-policy-the target is valid no matter how the experience was generated (as long as it is sufficiently exploratory). Typically, the loss is minimized over mini-batches B of past data (x, a, r, x) sampled from a large experience replay buffer R . One common loss function for training Q θ * is mean squared Bellman error: 2. Under this loss, RL can be viewed as 2 -regression of Q θ (·, ·) w.r.t. target labels r + γ max a Q θ target (x, a). We augment DQN, using double Q-learning for more stable training , whose loss is: A hinge loss can also be used in Q-learning, and has connections to the linear programming (LP) formulation of the MDP . The optimal Q-network weights can be specified as: To stabilize training, we replace the Q-network of the inner maximization with the target Q-network and the optimal Q-value with the double-Q label, giving (see Appendix A for details): In this work, we assume the Q-function approximation Q θ to be a feed-forward network. Specifically, let Q θ be a K-layer feed-forward NN with state-action input (x, a) (where a lies in a ddimensional real vector space) and hidden layers arranged according to the equations: where (W j, b j) are the multiplicative and bias weights, c is the output weight of the Q-network, are the weights of the Q-network,ẑ j denotes pre-activation values at layer j, and h(·) is the (component-wise) activation function. For simplicity, in the following analysis, we restrict our attention to the case when the activation functions are ReLU's. We also assume that the action space A is a d-dimensional ∞ -ball B ∞ (a, ∆) with some radius ∆ > 0 and center a. Therefore, at any arbitrary state x ∈ X the max-Q problem can be re-written as q * While the above formulation is intuitive, the nonlinear equality constraints in the neural network formulation makes this problem non-convex and NP-hard . Policy-based methods (; ;) have been widely-used to handle continuous actions in RL. However, they suffer from several well-known difficulties, e.g., (i) modeling high-dimensional action distributions, (ii) handling action constraints, and (iii) data-inefficiency. 
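To make the training signal concrete, below is a minimal NumPy sketch of the l2 Bellman residual, assuming the standard double-Q label r + γ·Q_target(x', arg max_a Q(x', a)); `argmax_q` stands for whichever plug-in inner-maximization oracle is used (exact or approximate), and mini-batching, optimizer updates, and network details are placeholders rather than the paper's implementation.

```python
import numpy as np

def caql_l2_loss(batch, q, q_target, argmax_q, gamma=0.99):
    """l2 Bellman residual with a double-Q label.

    batch    : iterable of transitions (x, a, r, x_next) from the replay buffer
    q        : callable (x, a) -> Q_theta(x, a)          (online network)
    q_target : callable (x, a) -> Q_theta_target(x, a)   (target network)
    argmax_q : callable (q_fn, x) -> approximate arg max_a q_fn(x, a)
               (the plug-and-play optimizer: MIP, GA, or CEM)
    """
    residuals = []
    for x, a, r, x_next in batch:
        # double-Q label: maximize with the online network, evaluate the target one
        a_star = argmax_q(q, x_next)
        label = r + gamma * q_target(x_next, a_star)
        residuals.append((q(x, a) - label) ** 2)
    return np.mean(residuals)
```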
Motivated by earlier work on value-based RL methods, such as QTOpt and actor-expert , we propose Continuous Action Q-learning (CAQL), a general framework for continuous-action value-based RL, in which the Qfunction is parameterized by a NN (Eq. 3). One novelty of CAQL is the formulation of the "max-Q" problem, i.e., the inner maximization in and, as a mixed-integer programming (MIP). The benefit of the MIP formulation is that it guarantees that we find the optimal action (and its true bootstrapped Q-value) when computing target labels (and at inference time). We show empirically that this can induce better performance, especially when the Q-network has sufficient representation power. Moreover, since MIP can readily model linear and combinatorial constraints, it offers considerable flexibility when incorporating complex action constraints in RL. That said, finding the optimal Q-label (e.g., with MIP) is computationally intensive. To alleviate this, we develop several approximation methods to systematically reduce the computational demands of the inner maximization. In Sec. 3.2, we introduce the action function to approximate the arg max-policy at inference time, and in Sec. 4 we propose three techniques, dynamic tolerance, dual filtering, and clustering, to speed up max-Q computation during training. In this section, we illustrate how the max-Q problem, with the Q-function represented by a ReLU network, can be formulated as a MIP, which can be solved using off-the-shelf optimization packages (e.g., SCIP , CPLEX , Gurobi ). In addition, we detail how approximate optimizers, specifically, gradient ascent (GA) and the cross-entropy method (CEM), can trade optimality for speed in max-Q computation within CAQL. A trained feed-forward ReLU network can be modeled as a MIP by formulating the nonlinear activation function at each neuron with binary constraints. Specifically, for a ReLU with pre-activation function of form z = max{0, w x + b}, where and, u ∈ R d are the weights, bias and lower-upper bounds respectively, consider the following set with a binary variable ζ indicating whether the ReLU is active or not:. In this formulation, both M + = max x∈[,u] w x + b and M − = min x∈[,u] w x + b can be computed in linear time in d. We assume M + > 0 and M − < 0, otherwise the function can be replaced by z = 0 or z = w x + b. These constraints ensure that z is the output of the ReLU: If ζ = 0, then they are reduced to z = 0 ≥ w x+b, and if ζ = 1, then they become z = w x+b ≥ 0. This can be extended to the ReLU network in by chaining copies of intermediate ReLU formulations. More precisely, if the ReLU Q-network has m j neurons in layer j ∈ {2, . . ., K}, for any given state x ∈ X, the max-Q problem can be reformulated as the following MIP:,..., K}, i ∈ {1, . . ., m j}, where 1 = a − ∆, u 1 = a + ∆ are the (action) input-bound vectors. Since the output layer of the ReLU NN is linear, the MIP objective is linear as well. Here, W j,i ∈ R mj and b j,i ∈ R are the weights and bias of neuron i in layer j. Furthermore, j, u j are interval bounds for the outputs of the neurons in layer j for j ≥ 2, and computing them can be done via interval arithmetic or other propagation methods from the initial action space bounds (see Appendix C for details). As detailed by , this can be further tightened with additional constraints, and its implementation can be found in the tf.opt package described therein. As long as these bounds are redundant, having these additional box constraints will not affect optimality. 
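The following sketch is an illustration of the big-M/binary encoding above rather than the paper's implementation: it models a small ReLU Q-network with the open-source PuLP modeler and the CBC solver as a stand-in for SCIP, obtains the pre-activation bounds by simple interval arithmetic, and assumes the argument layout `weights = [W_1, ..., W_{K-1}, c]` with matching biases.

```python
import numpy as np
import pulp

def max_q_mip(weights, biases, x_state, a_center, delta):
    """Solve max_a Q_theta(x, a) over the l_inf ball B(a_center, delta) for a
    ReLU network with hidden layers (W_j, b_j) and a linear output layer c."""
    prob = pulp.LpProblem("max_q", pulp.LpMaximize)

    lo = np.asarray(a_center, dtype=float) - delta
    hi = np.asarray(a_center, dtype=float) + delta
    a = [pulp.LpVariable(f"a_{i}", lowBound=lo[i], upBound=hi[i]) for i in range(len(lo))]

    # layer-1 input: fixed state features concatenated with the action variables
    z_prev = list(map(float, x_state)) + a
    l_prev = np.concatenate([np.asarray(x_state, dtype=float), lo])
    u_prev = np.concatenate([np.asarray(x_state, dtype=float), hi])

    for j, (W, b) in enumerate(zip(weights[:-1], biases[:-1])):
        # interval arithmetic: valid (possibly loose) bounds on this layer's pre-activations
        l_hat = np.minimum(W, 0) @ u_prev + np.maximum(W, 0) @ l_prev + b
        u_hat = np.maximum(W, 0) @ u_prev + np.minimum(W, 0) @ l_prev + b
        z = []
        for i in range(W.shape[0]):
            zhat = pulp.lpSum(float(W[i, k]) * z_prev[k] for k in range(len(z_prev))) + float(b[i])
            zi = pulp.LpVariable(f"z_{j}_{i}", lowBound=0)          # z >= 0
            zeta = pulp.LpVariable(f"zeta_{j}_{i}", cat="Binary")   # ReLU active / inactive
            prob += zi >= zhat                                      # z >= w^T x + b
            prob += zi <= zhat - float(l_hat[i]) * (1 - zeta)       # big-M with M^- = l_hat
            prob += zi <= float(u_hat[i]) * zeta                    # big-M with M^+ = u_hat
            z.append(zi)
        z_prev = z
        l_prev, u_prev = np.maximum(l_hat, 0), np.maximum(u_hat, 0)

    c, b_out = weights[-1], biases[-1]                              # linear output layer
    prob += pulp.lpSum(float(c[k]) * z_prev[k] for k in range(len(z_prev))) + float(b_out)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return np.array([v.varValue for v in a]), pulp.value(prob.objective)
```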
We emphasize that the MIP returns provably global optima, unlike GA and CEM. Even when interrupted with stopping conditions such as a time limit, MIP often produces high-quality solutions in practice. In theory, this MIP formulation can be solved in time exponential on the number of ReLUs and polynomial on the input size (e.g., by naively solving an LP for each binary variable assignment). In practice however, a modern MIP solver combines many different techniques to significantly speed up this process, such as branch-and-bound, cutting planes, preprocessing techniques, and primal heuristics . Versions of this MIP model have been used in neural network verification (; ; ; ; ; ;) and analysis , but its application to RL is novel. also proposed a MIP formulation to solve the planning problem with non-linear state transition dynamics model learned with a NN, it is different than ours, which solves the max-Q problem. Gradient Ascent GA is a simple first-order optimization method for finding the (local) optimum of a differentiable objective function, such as a neural network Qfunction. At any state x ∈ X, given a "seed" action a 0, the optimal action arg max a Q θ (x, a) is computed iteratively by a t+1 ← a t + η∇ a Q θ (x, a), where η > 0 is a step size (either a tunable parameter or computed using back-tracking line search ). This process repeats until convergence, |Q θ (x, a t+1) − Q θ (x, a t)| <, or a maximum iteration count is reached. Cross-Entropy Method CEM is a derivative-free optimization algorithm. At any given state x ∈ X, it samples a batch of N actions {a i} N i=1 from A using a fixed distribution (e.g., a Gaussian) and ranks the corresponding Q-values. Using the top K < N actions, it then updates the sampling distribution, e.g., using the sample mean and covariance to update the Gaussian. This is repeated until convergence or a maximum iteration count is reached. In traditional Q-learning, the policy π * is "implemented" by acting greedily w.r.t. the learned Qfunction: π * (x) = arg max a Q θ (x, a). 3 However, computing the optimal action can be expensive in the continuous case, which may be especially problematic at inference time (e.g., when computational power is limited in, say embedded systems, or real-time response is critical). To mitigate the problem, we can use an action function π w: X → A-effectively a trainable actor network-to approximate the greedy-action mapping π *. We train π w using training data B = {(, where q * i is the max-Q label at state x i . Action function learning is then simply a supervised regression problem: 2 . This is similar to the notion of "distilling" an optimal policy from max-Q labels, as in actor-expert . Unlike actorexpert-a separate stochastic policy network is jointly learned with the Q-function to maximize the likelihood with the underlying optimal policy-our method learns a state-action mapping to approximate arg max a Q θ (x, a)-this does not require distribution matching and is generally more stable. The use of action function in CAQL is simply optional to accelerate data collection and inference. In this section, we propose three methods to speed up the computationally-expensive max-Q solution during training: (i) dynamic tolerance, (ii) dual filtering, and (iii) clustering. Dynamic Tolerance Tolerance plays a critical role in the stopping condition of nonlinear optimizers. 
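Before describing the speed-up techniques, here is a minimal NumPy sketch of the two approximate optimizers just described, with the tolerance `tol` serving as the stopping condition that dynamic tolerance will later adjust over training; the step size, sample counts, and diagonal-Gaussian CEM update are illustrative choices rather than the paper's exact settings.

```python
import numpy as np

def max_q_ga(q, grad_q, x, a0, lo, hi, eta=1e-2, tol=1e-6, max_iter=200):
    """Projected gradient ascent on a -> Q_theta(x, a) from a seed action a0."""
    a = np.clip(a0, lo, hi)
    val = q(x, a)
    for _ in range(max_iter):
        a_new = np.clip(a + eta * grad_q(x, a), lo, hi)
        val_new = q(x, a_new)
        converged = abs(val_new - val) < tol       # tolerance-based stopping rule
        a, val = a_new, val_new
        if converged:
            break
    return a, val

def max_q_cem(q, x, lo, hi, n=64, k=8, max_iter=20, tol=1e-6):
    """Cross-entropy method with a diagonal Gaussian sampling distribution."""
    mu, sigma = (lo + hi) / 2.0, (hi - lo) / 2.0
    prev_best = -np.inf
    for _ in range(max_iter):
        samples = np.clip(np.random.normal(mu, sigma, size=(n, len(lo))), lo, hi)
        values = np.array([q(x, a) for a in samples])
        elites = samples[np.argsort(values)[-k:]]   # top-k actions re-fit the sampler
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-8
        if values.max() - prev_best < tol:          # tolerance-based stopping rule
            break
        prev_best = values.max()
    return mu, q(x, mu)
```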
Intuitively, in the early phase of CAQL, when the Q-function estimate has high Bellman error, it may be wasteful to compute a highly accurate max-Q label when a crude estimate can already guide the gradient of CAQL to minimize the Bellman residual. We can speed up the max-Q solver by dynamically adjusting its tolerance τ > 0 based on (a) the TD-error, which measures the estimation error of the optimal Q-function, and (b) the training step t > 0, which ensures the bias of the gradient (induced by the sub-optimality of max-Q solver) vanishes asymptotically so that CAQL converges to a stationary point. While relating tolerance with the Bellman residual is intuitive, it is impossible to calculate that without knowing the max-Q label. To resolve this circular dependency, notice that the action function π w approximates the optimal policy, i.e., π w (·|x) ≈ arg max a Q θ (x, ·). We therefore replace the optimal policy with the action function in Bellman residual and propose the dynamic tolerance:, where k 1 > 0 and k 2 ∈ are tunable parameters. Under standard assumptions, CAQL with dynamic tolerance {τ t} converges a.s. to a stationary point (Thm. 1, ). The main motivation of dual filtering is to reduce the number of max-Q problems at each CAQL training step. For illustration, consider the formulation of hinge Q-learning in. Denote by q * x,θ target the max-Q label w.r.t. the target Q-network and next state x. The structure of the hinge penalty means the TD-error corresponding to sample (x, a, x, r) is inactive whenever q * x,θ target ≤ (Q θ (x, a) − r)/γ-this data can be discarded. In dual filtering, we efficiently estimate an upper bound on q * x,θ target using some convex relaxation to determine which data can be discarded before max-Q optimization. Specifically, recall that the main source of non-convexity in comes from the equality constraint of the ReLU activation function at each NN layer. Similar to MIP formulation, assume we have component-wise bounds We use this approximation to define the relaxed NN equations, which replace the nonlinear equality constraints in with the convex set H(l, u). We denote the optimal Q-value w.r.t. the relaxed NN asq * x, which is by definition an upper bound on q * x i. Hence, the condition:q * x,θ target ≤ (Q θ (x, a) − r)/γ is a conservative certificate for checking whether the data (x, a, x, r) is inactive. For further speed up, we estimateq * x with its dual upper bound (see Appendix C for derivations)q, where ν is defined by the following recursion "dual" network: ) · 1{s ∈ I j}, and replace the above certificate with an even more conservative one: Although dual filtering is derived for hinge Q-learning, it also applies to the 2 -loss counterpart by replacing the optimal value q One can utilize the inactive samples in the π w -learning problem by replacing the max-Q label q * x,θ with its dual approximation q x,θ. Since q x,θ ≥ q * x,θ, this replacement will not affect optimality. Clustering To reduce the number of max-Q solves further still, we apply online state aggregation , which picks a number of centroids from the batch of next states B as the centers of p-metric balls with radius b > 0, such that the union of these balls form a minimum covering of B. Specifically, at training step t ∈ {0, 1, . . .}, denote by C t (b) ⊆ B the set of next-state centroids. For each next state c ∈ C t (b), we compute the max-Q value q * c,θ target = max a Q θ target (c, a), where a * c is the corresponding optimal action. 
For all remaining next states x ∈ B \ C t (b), we approximate their max-Q values via first-order Taylor series expansionq x,θ target:= q centroid to x, i.e., c ∈ arg min c ∈Ct(b) x −c p. By the envelope theorem for arbitrary choice sets In this approach the cluster radius r > 0 controls the number of max-Q computations, which trades complexity for accuracy in Bellman residual estimation. This parameter can either be a tuned or adjusted dynamically (similar to dynamic tolerance), e.g., r t = k 3 · k t 4 with hyperparameters k 3 > 0 and k 4 ∈. Analogously, with this exponentially-decaying cluster radius schedule we can argue that the bias of CAQL gradient (induced by max-Q estimation error due to clustering) vanishes asymptotically, and the corresponding Q-function converges to a stationary point. To combine clustering with dual filtering, we define B df as the batch of next states that are inconclusive after dual filtering, i.e., B df = {x ∈ B : q x,θ target > (Q θ (x, a) − r)/γ}. Then instead of applying clustering to B we apply this method onto the refined batch B df. To illustrate the effectiveness of CAQL, we (i) compare several CAQL variants with several state-ofthe-art RL methods on multiple domains, and (ii) assess the trade-off between max-Q computation speed and policy quality via ablation analysis. Comparison with Baseline RL Algorithms We compare CAQL with three baseline methods, DDPG and TD3 -two popular policy-based deep RL algorithms-and NAF , a value-based method using an action-quadratic Q-function. We train CAQL using three different max-Q optimizers, MIP, GA, and CEM. Note that CAQL-CEM counterpart is similar to QT-Opt and CAQL-GA reflects some aspects actor-expert . These CAQL variants allow assessment of the degree to which policy quality is impacted by Q-learning with optimal Bellman residual (using MIP) rather than an approximation (using GA or CEM), at the cost of steeper computation. To match the implementations of the baselines, we use 2 loss when training CAQL. Further ablation analysis on CAQL with 2 loss vs. hinge loss is provided in Appendix E. We evaluate CAQL on one classical control benchmark (Pendulum) and five MuJoCo benchmarks (Hopper, Walker2D, HalfCheetah, Ant, Humanoid). Different than most previous work, we evaluate the RL algorithms on domains not just with default action ranges, but also using smaller, constrained action ranges (see Table 6 in Appendix D for action ranges used in our experiments). 4 The motivation for this is two-fold: (i) To simulate real-world problems , where the restricted ranges represent the safe/constrained action sets; (ii) To validate the hypothesis that action-distribution learning in policy-based methods cannot easily handle such constraints, while CAQL does so, illustrating its flexibility. We reduce episode limits from 1000 to 200 steps and use small networks to accommodate the MIP. Both changes lead to lower returns than that reported in state-of-the-art RL benchmarks . Details on network architectures and hyperparameters are described in Appendix D. Policy performance is evaluated every 1000 training iterations, using a policy with no exploration. Each measurement is an average return over 10 episodes, each generated using a separate random seed. To smooth learning curves, data points are averaged over a sliding window of size 3. Similar to the setting of , CAQL measurements are based on trajectories that are generated by the learned action function instead of the optimal action w.r.t. the Q-function. 
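Before turning to the results, the sketch below recaps the clustering approximation described earlier in this section: a greedy covering of the batch of next states by p-norm balls of radius b, exact max-Q solves at the centroids only, and first-order Taylor (envelope-theorem) estimates for the remaining states. The greedy covering heuristic and the helper names (`max_q`, `grad_x_q`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def clustered_max_q(batch_next_states, max_q, grad_x_q, b, p=2):
    """Approximate max-Q labels for a batch of next states.

    max_q(c)            -> (q_star, a_star): exact inner maximization at centroid c
    grad_x_q(c, a_star) -> gradient of Q_target w.r.t. the state, at (c, a_star)
    b                   -> cluster radius controlling how many exact solves are done
    """
    X = np.asarray(batch_next_states, dtype=float)
    centroids, cache = [], {}
    labels = np.empty(len(X))

    for i, x in enumerate(X):                               # greedy covering of the batch
        dists = [np.linalg.norm(x - X[c], ord=p) for c in centroids]
        if not centroids or min(dists) > b:
            centroids.append(i)
            cache[i] = max_q(x)                             # exact (expensive) solve
    for i, x in enumerate(X):
        c = min(centroids, key=lambda j: np.linalg.norm(x - X[j], ord=p))
        q_star, a_star = cache[c]
        # envelope theorem: expand around the centroid with a_star held fixed
        labels[i] = q_star + grad_x_q(X[c], a_star) @ (x - X[c])
    return labels, len(centroids)
```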
Table 1 shows the average return of CAQL and the baselines under the best hyperparameter configurations. CAQL significantly outperforms NAF on most benchmarks, as well as DDPG and TD3 on 10 of 14 benchmarks. Of all the CAQL policies, those trained using MIP are among the best performers in all the benchmarks except Ant [-0.25, 0 .25] and Humanoid [-0.25, 0.25]. This verifies our conjecture about CAQL: Q-learning with optimal Bellman residual (using MIP) performs better than using approximation (using GA, CEM) when the Q-function has sufficient representation power (which is more likely in low-dimensional tasks). Moreover, CAQL-MIP policies have slightly lower variance than those trained with GA and CEM on most benchmarks. Table 2 shows summary statistics of the returns of CAQL and the baselines on all 320 configurations (32 hyperparameter combinations × 10 random seeds) and illustrates the sensitivity to hyperparameters of each method. CAQL is least sensitive in 13 of 14 tasks, and policies trained using MIP optimization, specifically, are best in 8 of 14 tasks. This corroborates the hypothesis that value- Table 2: The mean ± standard deviation of (95-percentile) final returns over all 320 configurations (32 hyper parameter combinations×10 random seeds). The full training curves are given in Figure 4 in Appendix E. CAQL-MIP policies are least sensitive to hyper parameters on 8/14 benchmarks. based methods are generally more robust to hyperparameters than their policy-based counterparts. Table 9 in Appendix E.1 compares the speed (in terms of average elapsed time) of various max-Q solvers (MIP, GA, and CEM), with MIP clearly the most computationally intensive. We note that CAQL-MIP suffers from performance degradation in several high-dimensional environments with large action ranges (e.g., Ant [-0.25, 0 .25] and Humanoid [-0.25, 0.25] ). In these experiments, its performance is even worse than that of CAQL-GA or CAQL-CEM. We speculate that this is due to the fact that the small ReLU NN (32 × 16) doesn't have enough representation power to accurately model the Q-functions in more complex tasks, and therefore optimizing for the true max-Q value using an inaccurate function approximation impedes learning. Ablation Analysis We now study the effects of using dynamic tolerance, dual filtering, and clustering on CAQL via two ablation analyses. For simplicity, we experiment on standard benchmarks (with full action ranges), and primarily test CAQL-GA using an 2 loss. Default values on tolerance and maximum iteration are 1e-6 and 200, respectively. Table 3 shows how reducing the number of max-Q problems using dual filtering and clustering affects performance of CAQL. Dual filtering (DF) manages to reduce the number of max-Q problems (from 3.2% to 26.5% across different benchmarks), while maintaining similar performance with the unfiltered CAQL-GA. On top of dual filtering we apply clustering (C) to the set of inconclusive next states B df, in which the degree of approximation is controlled by the cluster radius. With a small cluster radius (e.g., b = 0.1), clustering further reduces max-Q solves without significantly impacting training performance (and in some cases it actually improves performance), though further increasing the radius would significant degrade performance. To illustrate the full trade-off of max-Q reduction versus policy quality, we also include the Dual method, which eliminates all max-Q computation with the dual approximation. 
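As a companion to the dual-filtering ablation above, the following sketch shows only the filtering logic: a transition is excluded from the exact inner maximization whenever a cheap upper bound on its max-Q value certifies that its hinge TD-term is inactive. Here `upper_bound_max_q` stands for the dual/convex-relaxation bound of Section 4 (or any other valid upper bound); the dual-network recursion itself is not reproduced. Always using the upper bound in place of an exact solve corresponds to the "Dual" baseline in Table 3.

```python
def dual_filter(batch, q, upper_bound_max_q, gamma=0.99):
    """Split a batch into transitions that still need an exact max-Q solve
    and those whose hinge TD-term is provably inactive."""
    need_solve, filtered = [], []
    for (x, a, r, x_next) in batch:
        ub = upper_bound_max_q(x_next)           # cheap upper bound on max_a Q_target(x_next, a)
        if ub <= (q(x, a) - r) / gamma:          # certificate: hinge TD-term is inactive
            filtered.append((x, a, r, x_next, ub))
        else:
            need_solve.append((x, a, r, x_next))
    return need_solve, filtered
```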
Table 4 shows how dynamic tolerance influences the quality of CAQL policies. Compared with the standard algorithm, with a large tolerance (τ = 100) GA achieves a notable speed up (with only 1 step per max-Q optimization) in training but incurs a loss in performance. GA with dynamic tolerance atttains the best of both worlds-it significantly Table 3: Ablation analysis on CAQL-GA with dual filtering and clustering, where both the mean ± standard deviation of (95-percentile) final returns and the average %-max-Q-reduction (in parenthesis) are based on the best configuration. See Figure 5 in Appendix E for training curves. Table 4: Ablation analysis on CAQL-GA with dynamic tolerance, where both the mean ± standard deviation of (95-percentile) final returns and the average number of GA iterations (in parenthesis) are based on the best configuration. See Figure 7 in Appendix E for training curves. NOTE: In (*) the performance significantly drops after hitting the peak, and learning curve does not converge. reduces inner-maximization steps (from 29.5% to 77.3% across different problems and initial τ settings), while achieving good performance. Additionally, Table 5 shows the of CAQL-MIP with dynamic tolerance (i.e., optimality gap). This method significantly reduces both median and variance of the MIP elapsed time, while having better performance. Dynamic tolerance eliminates the high latency in MIP observed in the early phase of training (see Figure 1 and 2). Table 5: Ablation analysis on CAQL-MIP with dynamic tolerance, where both the mean ± standard deviation of (95-percentile) final returns and the (median, standard deviation) of the elapsed time κ (in msec) are based on the best configuration. See Figure 11 in Appendix E for training curves. We proposed Continuous Action Q-learning (CAQL), a general framework for handling continuous actions in value-based RL, in which the Q-function is parameterized by a neural network. While generic nonlinear optimizers can be naturally integrated with CAQL, we illustrated how the inner maximization of Q-learning can be formulated as mixed-integer programming when the Qfunction is parameterized with a ReLU network. CAQL (with action function learning) is a general Q-learning framework that includes many existing value-based methods such as QT-Opt and actorexpert. Using several benchmarks with varying degrees of action constraint, we showed that the policy learned by CAQL-MIP generally outperforms those learned by CAQL-GA and CAQL-CEM; and CAQL is competitive with several state-of-the-art policy-based RL algorithms, and often outperforms them (and is more robust) in heavily-constrained environments. Future work includes: extending CAQL to the full batch learning setting, in which the optimal Q-function is trained using only offline data; speeding up the MIP computation of the max-Q problem to make CAQL more scalable; and applying CAQL to real-world RL problems. Consider an MDP with states X, actions A, transition probability function P, discount factor γ ∈, reward function R, and initial state distribution β. We want to find an optimal Q-function by solving the following optimization problem: The formulation is based on the LP formulation of MDP (see for more details). Here the distribution p(x, a) is given by the data-generating distribution of the replay buffer B. (We assume that the replay buffer is large enough such that it consists of experience from almost all state-action pairs.) 
It is well-known that one can transform the above constrained optimization problem into an unconstrained one by applying a penalty-based approach (to the constraints). For simplicity, here we stick with a single constant penalty parameter λ ≥ 0 (instead of going for a state-action Lagrange multiplier and maximizing that), and a hinge penalty function (·) +. With a given penalty hyper-parameter λ ≥ 0 (that can be separately optimized), we propose finding the optimal Q-function by solving the following optimization problem: Furthermore, recall that in many off-policy and offline RL algorithms (such as DQN), samples in form of are independently drawn from the replay buffer, and instead of the optimizing the original objective function, one goes for its unbiased sample average approximation (SAA). However, viewing from the objective function of problem, finding an unbiased SAA for this problem might be challenging, due to the non-linearity of hinge penalty function (·) +. Therefore, alternatively we turn to study the following unconstrained optimization problem: Using the Jensen's inequality for convex functions, one can see that the objective function in is an upper-bound of that in. Equality of the Jensen's inequality will hold in the case when transition function is deterministic. (This is similar to the argument of PCL algorithm.) Using Jensen's inequality one justifies that optimization problem is indeed an eligible upper-bound optimization to problem. Recall that p(x, a) is the data-generation distribution of the replay buffer B. The unbiased SAA of problem is therefore given by where are the N samples drawn independently from the replay buffer. In the following, we will find the optimal Q function by solving this SAA problem. In general when the state and action spaces are large/uncountable, instead of solving the Q-function exactly (as in the tabular case), we turn to approximate the Q-function with its parametrized form Q θ, and optimize the set of real weights θ (instead of Q) in problem. Sample an initial state x 0 from the initial distribution 8: Select action a = clip(π w (x) + N (0, σ), l, u) 10: Execute action a and observe reward r and new state x +1 Store transition (x, a, r, x +1) in Replay Buffer R for s ← 1,..., S do CAQL Training; S = 20 by default 13: Sample a random minibatch B of |B| transitions {( Initialize the refined batches B df ← B and B c ← B; For each (x i, a i, r i, x i) ∈ B df ∩ B c, compute optimal action a i using OPT(DTol): and the corresponding TD targets: For each (x i, a i, r i, x i) ∈ B \ (B c ∩ B df), compute the approximate TD target: Update the Q-function parameters: Update the action function parameters: Update the target Q-function parameters: Decay the Gaussian noise: Recall that the Q-function NN has a nonlinear activation function, which can be viewed as a nonlinear equality constraint, according to the formulation in. To tackle this constraint, proposed a convex relaxation of the ReLU non-linearity. Specifically, first, they assume that for given x ∈ X and a ∈ B ∞ (a) such that z 1 = (x, a), there exists a collection of component-wise bounds (l j, u j), j = 2,..., K − 1 such that l j ≤ẑ j ≤ u j. As long as the bounds are redundant, adding these constraints into primal problem q * x does not affect the optimal value. Second, the ReLU non-linear equality constraint is relaxed using a convex outer-approximation. 
In particular, for a scalar input a within the real interval [l, u], the exact ReLU non-linearity acting on a is captured by the set Its convex outer-approximation is given by: Analogously to, define the relaxed NN equations as: where the third equation above is understood to be component-wise across layer j for each j ∈ {2, . . ., K − 1}, i.e., where n j is the dimension of hidden layer j. Using the relaxed NN equations, we now propose the following relaxed (convex) verification problem: where δ Λ (·) is the indicator function for set Λ (i.e., δ Λ (x) = 0 if x ∈ Λ and ∞ otherwise). Note that the indicator for the vector-ReLU in cost function above is understood to be component-wise, i.e., The optimal value of the relaxed problem, i.e.,q * x is an upper bound on the optimal value for original problem q * x i. Thus, the certification one can obtain is the following: ifq * x ≤ (Q θ (x, a) − r)/γ, then the sample (x, a, x) is discarded for inner maximization. However, ifq * x > (Q θ (x, a) − r)/γ, the sample (x, a, x) may or may not have any contribution to the TD-error in the hinge loss function. To further speed up the computation of the verification problem, by looking into the dual variables of problem, in the next section we propose a numerically efficient technique to estimate a suboptimal, upper-bound estimate toq * x, namely q x. Therefore, one verification criterion on whether a sample drawn from replay buffer should be discarded for inner-maximization is check whether the following inequality holds: C.1 SUB-OPTIMAL SOLUTION TO THE RELAXED PROBLEM In this section, we detail the sub-optimal lower bound solution to the relaxed problem in as proposed in. Let ν j, j = 2,..., K denote the dual variables for the linear equality constraints in problem. The Lagrangian for the relaxed problem in is given by: Defineν j: = W j ν j+1 for j = 1,..., K − 1, and defineν Recall that for a real vector space X ⊆ R n, let X * denote the dual space of X with the standard pairing ·, ·: X × X * → R. For a real-valued function f: X → R ∪ {∞, −∞}, let f *: X * → R ∪ {∞, −∞} be its convex conjugate, defined as: f * (y) = − inf x∈X (f (x) − y, x ) = sup x∈X (y, x − f (x)), ∀y ∈ X *. Therefore, the conjugate for the vector-ReLU indicator above takes the following component-wise structure: Now, the convex conjugate of the set indicator function is given by the set support function. Thus, where · q is the l p -dual norm defined by the identity 1/p + 1/q = 1. To compute the convex conjugate for the ReLU relaxation, we analyze the scalar definition as provided in. Specifically, we characterize δ H (l,u) (p, q) defined by the scalar bounds (l, u), for the dual vector (p, q) ∈ R 2. There exist 3 possible cases: Case I: l < u ≤ 0: Case II: 0 ≤ l < u: Case III: l < 0 < u: For this case, the sup will occur either on the line −ux + (u − l)y = −ul or at the origin. Thus, Applying these in context of equation, we calculate the Lagrange multipliers by considering the following cases. Case I: l j (s) < u j (s) ≤ 0: In this case, since z j (s) = 0 regardless of the value ofẑ j (s), one can simply remove these variables from problems and by eliminating the j th row of W j−1 and b j−1 and the j th column of W i. Equivalently, from, one can remove their contribution in by setting ν j (s) = 0. Case II: 0 ≤ l j (s) < u j (s): In this case, the ReLU non-linearity for (ẑ j (s), z j (s)) in problems and may be replaced with the convex linear equality constraint z j (s) =ẑ j (s) with associated dual variable µ. 
Within the Lagrangian, this would in a modification of the term Minimizing this over (ẑ j (s), z j (s) ), a non-trivial lower bound (i.e., 0) is obtained only if ν j (s) = ν j (s) = µ. Equivalently, from, we set ν j (s) =ν j (s). Case III: For the non-trivial third case, where l j (s) < 0 < u j (s), notice that due toν, the dual function g(ν) is not decoupled across the layers. In order to get a sub-optimal, but analytical solution to the dual optimization, we will optimize each term within the first sum in independently. To do this, notice that the quantity in sub-case I in is strictly greater than the other two sub-cases. Thus, the best bound is obtained by using the third sub-case, which corresponds to setting:. Combining all the previous analysis, we now calculate the dual of the solution to problem. Let I Using the above case studies, a sub-optimal (upper-bound) dual solution to the primal solutionJ x in problem is given by where ν is defined by the following recursion, termed the "dual" network: and D j is a diagonal matrix with C.2 COMPUTING PRE-ACTIVATION BOUNDS For k ∈ {3, . . ., K − 1}, define the k−partial NN as the set of equations: Finding the lower bound l k forẑ k involves solving the following problem: where e s is a one-hot vector with the non-zero element in the s-th entry, for s ∈ {1, . . ., n k}. Similarly, we obtain u k by maximizing the objective above. Assuming we are given bounds {l j, u j} k−1 j=2, we can employ the same convex relaxation technique and approximate dual solution as for the verification problem (since we are simply optimizing a linear function of the output of the first k layers of the NN). Doing this recursively allows us to compute the bounds {l j, u j} for j = 3,..., K − 1. The recursion is given in Algorithm 1 in and is based on the matrix form of the recursion in, i.e., with c replaced with I and −I, so that the quantity in We use a two hidden layer neural network with ReLU activation (32 units in the first layer and 16 units in the second layer) for both the Q-function and the action function. The input layer for the Q-function is a concatenated vector of state representation and action variables. The Q-function has a single output unit (without ReLU). The input layer for the action function is only the state representation. The output layer for the action function has d units (without ReLU), where d is the action dimension of a benchmark environment. We use SCIP 6.0.0 for the MIP solver. A time limit of 60 seconds and a optimality gap limit of 10 −4 are used for all experiments. For GA and CEM, a maximum iterations of 20 and a convergence threshold of 10 are used for all experiments. E ADDITIONAL EXPERIMENTAL E.1 OPTIMIZER SCALABILITY Table 9 shows the average elapsed time of various optimizers computing max-Q in the experiment setup described in Appendix D. MIP is more robust to action dimensions than GA and CEM. MIP latency depends on the state of neural network weights. It takes longer time with highly dense NN weights, but on the other hand, it can be substantially quicker with sparse NN weights. Figure 1 shows the average elapsed time of MIP over training steps for various benchmarks. We have observed that MIP is very slow in the beginning of the training phase but it quickly becomes faster. This trend is observed for most benchmarks except Humanoid. 
We speculate that the NN weights for the Q-function are dense at the beginning of the training phase but gradually become more structured (e.g., sparser), so that the max-Q problem becomes easier for the MIP solver. Table 9: The (median, standard deviation) of the average elapsed time κ (in msec) of various solvers computing the max-Q problem. Figure 4: The mean return over all 320 configurations (32 hyperparameter combinations × 10 random seeds). The shaded area is ± standard deviation. Data points are averaged over a sliding window of size 3. The length of an episode is limited to 200 steps. Table 11: Ablation analysis on CAQL-GA with dynamic tolerance, where both the mean ± standard deviation of (95-percentile) final returns and the average number of GA iterations (in parentheses) are over all 320 configurations. See Figure 8 in Appendix E for training curves. Table 13: The mean ± standard deviation of (95-percentile) final returns over all 320 configurations (32 hyperparameter combinations × 10 random seeds). The full training curves are given in Figure 10 in Appendix E.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BkxXe0Etwr
A general framework of value-based reinforcement learning for continuous control