source
stringlengths
273
149k
source_labels
sequence
paper_id
stringlengths
9
11
target
stringlengths
18
668
Due to the success of deep learning to solving a variety of challenging machine learning tasks, there is a rising interest in understanding loss functions for training neural networks from a theoretical aspect. Particularly, the properties of critical points and the landscape around them are of importance to determine the convergence performance of optimization algorithms. In this paper, we provide a necessary and sufficient characterization of the analytical forms for the critical points (as well as global minimizers) of the square loss functions for linear neural networks. We show that the analytical forms of the critical points characterize the values of the corresponding loss functions as well as the necessary and sufficient conditions to achieve global minimum. Furthermore, we exploit the analytical forms of the critical points to characterize the landscape properties for the loss functions of linear neural networks and shallow ReLU networks. One particular is that: While the loss function of linear networks has no spurious local minimum, the loss function of one-hidden-layer nonlinear networks with ReLU activation function does have local minimum that is not global minimum. In the past decade, deep neural networks BID8 have become a popular tool that has successfully solved many challenging tasks in a variety of areas such as machine learning, artificial intelligence, computer vision, and natural language processing, etc. As the understandings of deep neural networks from different aspects are mostly based on empirical studies, there is a rising need and interest to develop understandings of neural networks from theoretical aspects such as generalization error, representation power, and landscape (also referred to as geometry) properties, etc. In particular, the landscape properties of loss functions (that are typically nonconex for neural networks) play a central role to determine the iteration path and convergence performance of optimization algorithms. One major landscape property is the nature of critical points, which can possibly be global minima, local minima, saddle points. There have been intensive efforts in the past into understanding such an issue for various neural networks. For example, it has been shown that every local minimum of the loss function is also a global minimum for shallow linear networks under the autoencoder setting and invertibility assumptions BID1 and for deep linear networks BID11; BID14; respectively under different assumptions. The conditions on the equivalence between local minimum or critical point and global minimum has also been established for various nonlinear neural networks; BID9; BID15; BID17; BID6 under respective assumptions. However, most previous studies did not provide characterization of analytical forms for critical points of loss functions for neural networks with only very few exceptions. In BID1, the authors provided an analytical form for the critical points of the square loss function of shallow linear networks under certain conditions. Such an analytical form further helps to establish the landscape properties around the critical points. Further in BID13, the authors characterized certain sufficient form of critical points for the square loss function of matrix factorization problems and deep linear networks. The focus of this paper is on characterizing the sufficient and necessary forms of critical points for broader scenarios, i.e., shallow and deep linear networks with no assumptions on data matrices and network dimensions, and shallow ReLU networks over certain parameter space. In particular, such analytical forms of critical points capture the corresponding loss function values and the necessary and sufficient conditions to achieve global minimum. This further enables us to establish new landscape properties around these critical points for the loss function of these networks under general settings, and provides alternative (yet simpler and more intuitive) proofs for existing understanding of the landscape properties. OUR CONTRIBUTION 1) For the square loss function of linear networks with one hidden layer, we provide a full (necessary and sufficient) characterization of the analytical forms for its critical points and global minimizers. These generalize the characterization in BID1 to arbitrary network parameter dimensions and any data matrices. Such a generalization further enables us to establish the landscape property, i.e., every local minimum is also a global minimum and all other critical points are saddle points, under no assumptions on parameter dimensions and data matrices. From a technical standpoint, we exploit the analytical forms of critical points to provide a new proof for characterizing the landscape around the critical points under full relaxation of assumptions, where the corresponding approaches in BID1 are not applicable. As a special case of linear networks, the matrix factorization problem satisfies all these landscape properties.2) For the square loss function of deep linear networks, we establish a full (necessary and sufficient) characterization of the analytical forms for its critical points and global minimizers. Such characterizations are new and have not been established in the existing art. Furthermore, such analytical form divides the set of non-global-minimum critical points into different categories. We identify the directions along which the loss function value decreases for two categories of the critical points, for which our directly implies the equivalence between the local minimum and the global minimum. For these cases, our proof generalizes the in BID11 under no assumptions on the network parameter dimensions and data matrices.3) For the square loss function of one-hidden-layer nonlinear neural networks with ReLU activation function, we provide a full characterization of both the existence and the analytical forms of the critical points in certain types of regions in the parameter space. Particularly, in the case where there is one hidden unit, our fully characterize the existence and the analytical forms of the critical points in the entire parameter space. Such characterization were not provided in previous work on nonlinear neural networks. Moreover, we apply our to a concrete example to demonstrate that both local minimum that is not a global minimum and local maximum do exist in such a case. Analytical forms of critical points: Characterizing the analytical form of critical points for loss functions of neural networks dates back to BID1, where the authors provided an analytical form of the critical points for the square loss function of linear networks with one hidden layer. In BID13, the authors provided a sufficient condition of critical points of a generic function, i.e., the fixed point of invariant groups. They then characterized certain sufficient forms of critical points for the square loss function of matrix factorization problems and deep linear networks, whereas our provide sufficient and necessary forms of critical points for deep linear networks via a different approach. Properties of critical points: BID1; BID0 studied the linear autoencoder with one hidden layer and showed the equivalence between the local minimum and the global minimum. Moreover, BID2 generalized these to the complex-valued autoencoder setting. The deep linear networks were studied by some recent work BID11; BID14 , in which the equivalence between the local minimum and the global minimum was established respectively under different assumptions. established a necessary and sufficient condition for a critical point of the deep linear network to be a global minimum. A similar was established in BID7 for deep linear networks under the setting that the widths of intermediate layers are larger than those of the input and output layers. The effect of regularization on the critical points for a two-layer linear network was studied in.For nonlinear neural networks, studied a nonlinear neural network with one hidden layer and sigmoid activation function, and showed that every local minimum is also a global minimum provided that the number of input units equals the number of data samples. BID9 considered a class of multi-layer nonlinear networks with a pyramidal structure, and showed that all critical points of full column rank achieve the zero loss when the sample size is less than the input dimension. These were further generalized to a larger class of nonlinear networks in BID15, in which they also showed that critical points with non-degenerate Hessian are global minimum. BID3 b) connected the loss surface of deep nonlinear networks with the Hamiltonian of the spin-glass model under certain assumptions and characterized the distribution of the local minimum. BID11 further eliminated some of the assumptions in BID3, and established the equivalence between the local minimum and the global minimum by reducing the loss function of the deep nonlinear network to that of the deep linear network. BID17 showed that a two-layer nonlinear network has no bad differentiable local minimum. BID6 studied a one-hidden-layer nonlinear neural network with the parameters restricted in a set of directions of lines, and showed that most local minima are global minima. considered a two-layer ReLU network with Gaussian input data, and showed that critical points in certain region are non-isolated and characterized the critical-point-free regions. Geometric curvature BID10 established the gradient dominance condition of deep linear residual networks, and further established the gradient dominance condition and regularity condition around the global minimizers for deep linear, deep linear residual and shallow nonlinear networks. BID12 studied the property of the Hessian matrix for deep linear residual networks. The local strong convexity property was established in BID16 for overparameterized nonlinear networks with one hidden layer and quadratic activation functions, and was established in for a class of nonlinear networks with one hidden layer and Gaussian input data. further established the local linear convergence of gradient descent method with tensor initialization. BID18 studied a one-hidden-layer nonlinear network with a single output, and showed that the volume of sub-optimal differentiable local minima is exponentially vanishing in comparison with the volume of global minima. BID5 investigated the saddle points in deep neural networks using the from statistical physics and random matrix theory. Notation: The pseudoinverse, column space and null space of a matrix M are denoted by M †, col(M) and ker(M), respectively. For any index sets I, J ⊂ N, M I,J denotes the submatrix of M formed by the entries with the row indices in I and the column indices in J. For positive integers i ≤ j, we define i: j = {i, i + 1, . . ., j − 1, j}. The projection operator onto a linear subspace V is denoted by P V. In this section, we study linear neural networks with one hidden layer. Suppose we have an input data matrix X ∈ R d0×m and a corresponding output data matrix Y ∈ R d2×m, where there are in total m data samples. We are interested in learning a model that maps from X to Y via a linear network with one hidden layer. Specifically, we denote the weight parameters between the output layer and the hidden layer of the network as A 2 ∈ R d2×d1, and denote the weight parameters between the hidden layer and the input layer of the network as A 1 ∈ R d1×d0. We are interested in the square loss function of this linear network, which is given by DISPLAYFORM0 Note that in a special case where X = I, L reduces to a loss function for the matrix factorization problem, to which all our apply. The loss function L has been studied in BID1 under the assumptions that d 2 = d 0 ≥ d 1 and the matrices XX, Y X (XX) −1 XY are invertible. In our study, no assumption is made on either the parameter dimensions or the invertibility of the data matrices. Such full generalization of the in BID1 turns out to be critical for our study of nonlinear shallow neural networks in Section 4.We further define Σ:= Y X † XY and denote its full singular value decomposition as U ΛU. Suppose that Σ has r distinct positive singular values σ 1 > · · · > σ r > 0 with multiplicities m 1,..., m r, respectively, and hasm zero singular values. Recall that DISPLAYFORM1 Our first provides a full characterization of all critical points of L. Theorem 1 (Characterization of critical points). All critical points of L are necessarily and sufficiently characterized by a matrix L 1 ∈ R d1×d0, a block matrix V ∈ R d2×d1 and an invertible matrix C ∈ R d1×d1 via DISPLAYFORM2 DISPLAYFORM3, where both V i ∈ R mi×pi and V ∈ Rm ×p consist of orthonormal columns with the number of columns DISPLAYFORM4 Theorem 1 characterizes the necessary and sufficient forms for all critical points of L. Intuitively, the matrix C captures the invariance of the product A 2 A 1 under an invertible transform, and L 1 captures the degree of freedom of the solution set for linear systems. In general, the set of critical points is uncountable and cannot be fully listed out. However, the analytical forms in eqs. and do allow one to construct some critical points of L by specifying choices of L 1, V, C that fulfill the condition in eq.. For example, choosing L 1 = 0 guarantees eq., in which case eqs. and yield a critical point (C −1 V U Y X †, U V C) for any invertible matrix C and any block matrix V that takes the form specified in Theorem 1. For nonzero L 1, one can fix a proper V and solve the linear equation on C in eq.. If a solution exists, we then obtain the form of a corresponding critical point. We further note that the analytical structures of the critical points are more important, which have direct implications on the global optimality conditions and landscape properties as we show in the remaining part of the section. Remark 1. We note that the block pattern parameters {p i} r i=1 andp denote the number of columns of {V i} r i=1 and V, respectively, and their sum equals the rank of A 2, i.e., DISPLAYFORM5 The parameters p i, i = 1,..., r,p of V contain all useful information of the critical points that determine the function value of L as presented in the following proposition. DISPLAYFORM6 Proposition 1 evaluates the function value L at a critical point using the parameters {p i} r i=1. To explain further, recall that the data matrix Σ has each singular value σ i with multiplicity m i. For each i, the critical point captures p i out of m i singular values σ i. Hence, for a σ i with larger value (i.e., a smaller index i), it is desirable that a critical point captures a larger number p i of them. In this way, the critical point captures more important principle components of the data so that the value of the loss function is further reduced as suggested by Proposition 1. In summary, the parameters {p i} r i=1 characterize how well the learned model fits the data in terms of the value of the loss function. Moreover, the parameters {p i} r i=1 also determine a full characterization of the global minimizers as given below. Proposition 2 (Characterization of global minimizers). A critical point (A 1, A 2) of L is a global minimizer if and only if it falls into the following two cases. DISPLAYFORM7 The analytical form of any global minimizer can be obtained from Theorem 1 with further specification to the above two cases. Proposition 2 establishes the neccessary and sufficient conditions for any critical point to be a global minimizer. If the data matrix Σ has a large number of nonzero singular values, i.e., the first case, one needs to exhaust the representation budget (i.e., rank) of A 2 and capture as many large singular values as the rank allows to achieve the global minimum; Otherwise, A 2 of a global minimizer can be non-full rank and still captures all nonzero singular values. Note that A 2 must be full rank in the case 1, and so is A 1 if we further adopt the assumptions on the network size and data matrices in BID1. Furthermore, the parameters {p i} r i=1 naturally divide all non-global-minimum critical points (A 1, A 2) of L into the following two categories.• (Non-optimal order): The matrix V specified in Theorem 1 satisfies that there exists 1 ≤ i < j ≤ r such that p i < m i and p j > 0.• (Optimal order): rank(A 2) < min{d 2, d 1} and the matrix V specified in Theorem 1 satisfies that DISPLAYFORM8 To understand the above two categories, note that a critical point of L with non-optimal order captures a smaller singular value σ j (since p j > 0) while skipping a larger singular value σ i with a lower index i < j (since p i < m i), and hence cannot be a global minimizer. On the other hand, although a critical point of L with optimal order captures the singular values in the optimal (i.e., decreasing) order, it does not fully utilize the representation budget of A 2 (because A 2 is non-full rank) to further capture nonzero singular values and reduce the function value, and hence cannot be a global minimizer either. Next, we show that these two types of non-global-minimum critical points have different landscape properties around them. Throughout, a matrix M is called the perturbation of M if it lies in an arbitrarily small neighborhood of M.Proposition 3 (Landscape around critical points). The critical points of L have the following landscape properties.1. A non-optimal-order critical point (A 1, A 2) has a perturbation (A 1, A 2) with rank(A 2) = rank(A 2), which achieves a lower function value; 2. An optimal-order critical point (A 1, A 2) has a perturbation (A 1, A 2) with rank(A 2) = rank(A 2) + 1, which achieves a lower function value; 3. Any point in X:= {(A 1, A 2): A 2 A 1 X = 0} has a perturbation (A 1, A 2), which achieves a higher function value;As a consequence, items 1 and 2 imply that any non-global-minimum critical point has a descent direction, and hence cannot be a local minimizer. Thus, any local minimizer must be a global minimizer. Item 3 implies that any point has an ascent direction whenever the output is nonzero. Hence, there does not exist any local/global maximizer in X. Furthermore, item 3 together with items 1 and 2 implies that any non-global-minimum critical point in X has both descent and ascent directions, and hence must be a saddle point. We summarize these facts in the following theorem. Theorem 2 (Landscape of L). The loss function L satisfies: 1) every local minimum is also a global minimum; 2) every non-global-minimum critical point in X is a saddle point. We note that the saddle points in Theorem 2 can be non-strict when the data matrices are singular. As an illustrative example, consider the following loss function of a shallow linear network L(a 2, a 1) = 1 2 (a 2 a 1 x − y) 2, where a 1, a 2, x and y are all scalars. Consider the case y = 0. Then, the Hessian at the saddle point a 1 = 0, a 2 = 1 is [x 2, 0; 0, 0], which does not have any negative eigenvalue. From a technical point of view, the proof of item 1 of Proposition 3 applies that in BID0 and generalizes it to the setting where Σ can have repeated singular values and may not be invertible. To further understand the perturbation scheme from a high level perspective, note that non-optimalorder critical points capture a smaller singular value σ j instead of a larger one σ i with i < j. Thus, one naturally perturbs the singular vector corresponding to σ j along the direction of the singular vector corresponding to σ i. Such a perturbation scheme preserves the rank of A 2 and reduces the value of the loss function. More importantly, the proof of item 2 of Proposition 3 introduces a new technique. As a comparison, BID1 proves a similar as item 2 using the strict convexity of the function, which requires the parameter dimensions to satisfy d 2 = d 0 ≥ d 1 and the data matrices to be invertible. In contrast, our proof completely removes these restrictions by introducing a new perturbation direction and exploiting the analytical forms of critical points in eqs. and and the condition in eq.. The accomplishment of the proof further requires careful choices of perturbation parameters as well as judicious manipulations of matrices. We refer the reader to the supplemental materials for more details. As a high level understanding, since optimal-order critical points capture the singular values in an optimal (i.e., decreasing) order, the previous perturbation scheme for non-optimal-order critical points does not apply. Instead, we increase the rank of A 2 by one in a way that the perturbed matrix captures the next singular value beyond the ones that have already been captured so that the value of the loss function can be further reduced. In this section, we study deep linear networks with ≥ 2 layers. We denote the weight parameters between the layers as A k ∈ R d k ×d k−1 for k = 1,...,, respectively. The input and output data are denoted by X ∈ R d0×m, Y ∈ R d ×m, respectively. We are interested in the square loss function of deep linear networks, which is given by DISPLAYFORM0, respectively, andm(k) zero singular values. Our first provides a full characterization of all critical points of L D, where we denote DISPLAYFORM1 Theorem 3 (Characterization of critical points). All critical points of L D are necessarily and sufficiently characterized by matrices DISPLAYFORM2.., A can be individually expressed out recursively via the following two equations: DISPLAYFORM3 DISPLAYFORM4 Note that the forms of the individual parameters A 1,..., A can be obtained as follows by recursively applying eqs. and. First, eq. with k = 0 yields the form of A (,2). Then, eq. with k = 0 and the form of A (,2) yield the form of A 1. Next, eq. with k = 1 yields the form of A (,3), and then, eq. with k = 1 and the forms of A (,3), A 1 further yield the form of A 2. Inductively, one obtains the expressions of all individual parameter matrices. Furthermore, the first condition in eq. FORMULA13 is a consistency condition that guarantees that the analytical form for the entire product of parameter matrices factorizes into the forms of individual parameter matrices. Similarly to shallow linear networks, while the set of critical points here is also uncountable, Theorem 3 suggests ways to obtain some critical points. For example, if we set L k = 0 for all k (i.e., eq. is satisfied), we can obtain the form of critical points for any invertible C k and proper V k with the structure specified in Theorem 3. For nonzero L k, eq. needs to be verified for given C k and V k to determine a critical point. Similarly to shallow linear networks, the parameters {p i} r i=1,p determine the value of the loss function at the critical points and further specify the analytical form for the global minimizers, as we present in the following two propositions. DISPLAYFORM5 DISPLAYFORM6 In particular, A (,2) can be non-full rank with rank(A (,2) ) = DISPLAYFORM7 The analytical form of any global minimizer can be obtained from Theorem 3 with further specification to the above two cases. In particular for case 1, if we further adopt the invertibility assumptions on data matrices as in BID1 and assume that all parameter matrices are square, then all global minima must correspond to full rank parameter matrices. We next exploit the analytical forms of the critical points to further understand the landscape of the loss function L D. It has been shown in BID11 that every local minimum of L D is also a global minimum, under certain conditions on the parameter dimensions and the invertibility of the data matrices. Here, our characterization of the analytical forms for the critical points allow us to understand such a from an alternative viewpoint. The proofs for certain cases (that we discuss below) are simpler and more intuitive, and no assumption is made on the data matrices and dimensions of the network. Similarly to shallow linear networks, we want to understand the local landscape around the critical points. However, due to the effect of depth, the critical points of L D are more complicated than those of L. Among them, we identify the following subsets of the non-global-minimum critical DISPLAYFORM8 • (Deep-non-optimal order): There exist 0 ≤ k ≤ − 2 such that the matrix V k specified in Theorem 3 satisfies that there exist 1 ≤ i < j ≤ r(k) such that p i (k) < m i (k) and p j (k) > 0.• (Deep-optimal order): (A, A −1) is not a global minimizer of L D with A (−2,1) being fixed, rank(A) < min{d, d −1}, and the matrix V −2 specified in Theorem 3 satisfies that DISPLAYFORM9 The following summarizes the landscape of L D around the above two types of critical points. The loss function L D has the following landscape properties. deep-non-optimal-order critical point (A 1, . . ., A) has a perturbation (A 1, . . ., A k+1, . . ., A) with rank(A) = rank(A), which achieves a lower function value. 2. A deep-optimal-order critical point (A 1, . . ., A) has a perturbation (A 1, . . ., A −1, A) with rank(A) = rank(A) + 1, which achieves a lower function value. 3. Any point in X D:= {(A 1, . . ., A): A (,1) X = 0} has a perturbation (A 1, . . ., A) that achieves a higher function value. Consequently, 1) every local minimum of L D is also a global minimum for the above two types of critical points; and 2) every critical point of these two types in X D is a saddle point. Theorem 4 implies that the landscape of L D for deep linear networks is similar to that of L for shallow linear networks, i.e., the pattern of the parameters {p i (k)} r(k) i=1 implies different descent directions of the function value around the critical points. Our approach does not handle the remaining set of non-global minimizers, i.e., there exists q ≤ −1 such that (A, . . ., A q) is a global minimum point of L D with A (q−1,1) being fixed, and A (,q) is of optimal order. It is unclear how to perturb the intermediate weight parameters using their analytical forms for deep networks, and we leave this as an open problem for the future work. In this section, we study nonlinear neural networks with one hidden layer. In particular, we consider nonlinear networks with ReLU activation function σ: R → R that is defined as σ(x):= max{x, 0}. Our study focuses on the set of differentiable critical points. The weight parameters between the layers are denoted by A 2 ∈ R d2×d1, A 1 ∈ R d1×d0, respectively, and the input and output data are denoted by X ∈ R d0×m, Y ∈ R d2×m, respectively. We are interested in the square loss function which is given by DISPLAYFORM0 where σ acts on A 1 X entrywise. Existing studies on nonlinear networks characterized the sufficient conditions for critical points being global minimum BID9 Since the activation function σ is piecewise linear, the entire parameter space can be partitioned into disjoint cones. In particular, we consider the set of cones K I×J where I ⊂ {1, . . ., d 1}, J ⊂ {1, . . ., m} that satisfy DISPLAYFORM1 where "≥" and "<" represent entrywise comparisons. Within K I×J, the term σ(A 1 X) activates only the entries σ(A 1 X) I:J, and the corresponding loss function L N is equivalent to DISPLAYFORM2 Hence, within K I×J, L N reduces to the loss of a shallow linear network with parameters ((A 2):,I, (A 1) I,: ) and input & output data pair (X :,J, Y :,J). Note that our on shallow linear networks in Section 2 are applicable to all parameter dimensions and data matrices. Thus, Theorem 1 fully characterizes the forms of critical points of L N in K I×J. Moreover, the existence of such critical points can be analytically examined by substituting their forms into eq.. In summary, we obtain the following , where we denote Σ J:= Y:,J X †:,J X:,J Y:,J with the full singular value decomposition U J Λ J U J, and suppose that Σ J has r(J) distinct positive singular values σ 1 (J) > · · · > σ r(J) (J) with multiplicities m 1,..., m r(J), respectively, andm(J) zero singular values. Proposition 6 (Characterization of critical points). All critical points of L N in K I×J for any I ⊂ {1, . . ., d 1}, J ⊂ {1, . . ., m} are necessarily and sufficiently characterized by an L 1 ∈ R |I|×d0, a block matrix V ∈ R d2×|I| and an invertible matrix C ∈ R |I|×|I| such that DISPLAYFORM3 DISPLAYFORM4 ×p consist of orthonormal columns with p i ≤ m i for i = 1,..., r(J),p ≤m such that DISPLAYFORM5 Moreover, a critical point in K I×J exists if and only if there exists such C, V, L 1 that DISPLAYFORM6 Other entries of A 1 X < 0.To further illustrate, we consider a special case where the nonlinear network has one unit in the hidden layer, i.e., d 1 = 1, in which case A 1 and A 2 are row and column vectors, respectively. Then, the entire parameter space can be partitioned into disjoint cones taking the form of K I×J, and I = {1} is the only nontrivial choice. We obtain the following from Proposition 6.Proposition 7 (Characterization of critical points). Consider L N with d 1 = 1 and any J ⊂ {1, . . ., m}. Then, any nonzero critical point of L N within K {1}×J can be necessarily and sufficiently characterized by an 1 ∈ R 1×d0, a block unit vector v ∈ R d2×1 and a scalar c ∈ R such that DISPLAYFORM7 Specifically, v is a unit vector that is supported on the entries corresponding to the same singular value of Σ J. Moreover, a nonzero critical point in K {1}×J exists if and only if there exist such c, v, 1 that satisfy DISPLAYFORM8 DISPLAYFORM9 We note that Proposition 7 characterizes both the existence and the forms of critical points of L N over the entire parameter space for nonlinear networks with a single hidden unit. The condition in eq. FORMULA24 is guaranteed because P ker(v) = 0 for v = 0.To further understand Proposition 7, suppose that there exists a critical point in K {1}×J with v being supported on the entries that correspond to the i-th singular value of Σ J. Then, Proposition 1 implies that DISPLAYFORM10 In particular, the critical point achieves the local minimum DISPLAYFORM11. This is because in this case the critical point is full rank with an optimal order, and hence corresponds to the global minimum of the linear network in eq.. Since the singular values of Σ J may vary with the choice of J, L N may achieve different local minima in different cones. Thus, local minimum that is not global minimum can exist for L N. The following proposition concludes this fact by considering a concrete example. Proposition 8. For one-hidden-layer nonlinear neural networks with ReLU activation function, there exists local minimum that is not global minimum, and there also exists local maximum. FORMULA13 and FORMULA19 hold if c −1 (v) 1,: ≥ 0, 1,: < 0. Similarly to the previous case, choosing c = 1, v =, 1 = (−1, 0) yields a local minimum that achieves the function value L n = 2. Hence, local minimum that is not global minimum does exist. Moreover, in the cone K I×J with I = {1}, J = ∅, the function L N remains to be the constant 5 2, and all points in this cone are local minimum or local maximum. Thus, the landscape of the loss function of nonlinear networks is very different from that of the loss function of linear networks. In this paper, we provide full characterization of the analytical forms of the critical points for the square loss function of three types of neural networks, namely, shallow linear networks, deep linear networks, and shallow ReLU nonlinear networks. We show that such analytical forms of the critical points have direct implications on the values of the corresponding loss functions, achievement of global minimum, and various landscape properties around these critical points. As a consequence, the loss function for linear networks has no spurious local minimum, while such point does exist for nonlinear networks with ReLU activation. In the future, it is interesting to further explore nonlinear neural networks. In particular, we wish to characterize the analytical form of critical points for deep nonlinear networks and over the full parameter space. Such will further facilitate the understanding of the landscape properties around these critical points. Notations: For any matrix M, denote vec(M) as the column vector formed by stacking its columns. Denote the Kronecker product as "⊗". Then, the following useful relationships hold for any dimension compatible matrices M, U, V, W: DISPLAYFORM0 DISPLAYFORM1 DISPLAYFORM2 DISPLAYFORM3 Recall that a point DISPLAYFORM4 DISPLAYFORM5 We first prove eqs. and. DISPLAYFORM6 Next, we derive the form of A 2. Recall the full singular value decomposition Σ = U ΛU, where Λ is a diagonal matrix with distinct singular values σ 1 >... > σ r > 0 and multiplicities m 1,..., m r, respectively. We also assume that there arem number of zero singular values in Λ. Using the fact that P col(A2) = U P col(U A2) U, the last equality in eq. reduces to DISPLAYFORM7 By the multiplicity pattern of the singular values in Λ, P col(U A2) must be block diagonal. Specifically, we can write P col(U A2) = diag(P 1, . . ., P r, P), where P i ∈ R mi×mi and P ∈ Rm ×m.Also, since P col(U A2) is a projection, P 1,..., P r, P must all be projections. Note that P col(U A2) has rank rank(A 2), and suppose that P 1,..., P r, P have ranks p 1,..., p r,p, respectively. Then, we must have p i ≤ m i for i = 1,..., r,p ≤m and r i=1 p i +p = rank(A 2). Also, note that each projection can be expressed as P i = V i V i with V i ∈ R mi×pi, V ∈ Rm ×p consisting of orthonormal columns. Hence, we can write P col(U A2) = V V where V = diag(V 1, . . ., V r, V). We then conclude that P col(A2) = U P col(U A2) U = U V V U. Thus, A 2 has the same column space as U V, and there must exist an invertible matrix DISPLAYFORM8 Then, plugging A † 2 = C −1 V U into eq. yields the desired form of A 1.We now prove eq.. Note that the above proof is based on the equations DISPLAYFORM9 Hence, the forms of A 1, A 2 in eqs. and need to further satisfy ∇ A2 L = 0. By eq. FORMULA19 and the form of A 2, we obtain that DISPLAYFORM10 This expression, together with the form of A 1 in eq., implies that DISPLAYFORM11 where (i) uses the fact that X † XX = X, (ii) uses the fact that the block pattern of V is compatible with the multiplicity pattern of the singular values in Λ, and hence V V ΛV = ΛV. On the other hand, we also obtain that DISPLAYFORM12 Thus, to satisfy ∇ A2 L = 0 in eq. FORMULA12, we require that DISPLAYFORM13 which is equivalent to DISPLAYFORM14 Lastly, note that (I − U V (U V) ) = P col(U V) ⊥, and (I − V V) = P ker(V), which concludes the proof. By expansion we obtain that L = DISPLAYFORM0. Consider any (A 1, A 2) that satisfies eq. FORMULA4, we have shown that such a point also satisfies eq., which further yields that DISPLAYFORM1 where (i) follows from the fact that Tr(P col(A2) Σ P col(A2) ) = Tr(P col(A2) Σ), and (ii) uses the fact that P col(A2) = U P col(U A2) U. In particular, a critical point (A 1, A 2) satisfies eq.. Moreover, using the form of the critical point A 2 = U V C, eq. FORMULA20 further becomes DISPLAYFORM2 where (i) is due to P col(V C) = P col(V) = V V, and (ii) utilizes the block pattern of V and the multiplicity pattern of Λ that are specified in Theorem 1.: Consider a critical point (A 1, A 2) with the forms given by Theorem 1. By choosing L 1 = 0, the condition in eq. FORMULA4 is guaranteed. Then, we can specify a critical point with any V that satisfies the block pattern specified in Theorem 1, i.e., we can choose any p i, i = 1,..., r,p such that p i ≤ m i for i = 1,..., r,p ≤m and DISPLAYFORM0 m i, the global minimum value is achieved by a full rank A 2 with rank(A 2) = min{d 2, d 1} and DISPLAYFORM1 That is, the singular values are selected in a decreasing order to minimize the function value.: If (A 2, A 1) is a global minimizer and min{d y, d} > r i=1 m i, the global minimum can be achieved by choosing p i = m i for all i = 1,..., r andp ≥ 0. In particular, we do not need a full rank A 2 to achieve the global minimum. For example, we can choose rank(A 2) = r i=1 m i < min{d y, d} with p i = m i for all i = 1,..., r andp = 0. We first prove item 1. Consider a non-optimal-order critical point (A 1, A 2). By Theorem 1, we can write A 2 = U V C where V = [diag(V 1, . . ., V r, V), 0] and V i, i = 1,..., r, V consist of orthonormal columns. Define the orthonormal block diagonal matrix Since (A 1, A 2) is a non-optimal-order critical point, there exists 1 ≤ i < j ≤ r such that p i < m i and p j > 0. Then, consider the following perturbation of U S for some > 0. DISPLAYFORM0 DISPLAYFORM1 with which we further define the perturbation matrix A 2 = M S V C. Also, let the perturbation matrix A 1 be generated by eq. with U ← M and V ← S V. Note that with this construction, (A 1, A 2) satisfies eq., which further implies eq. for (A 1, A 2), i.e., A 2 A 1 X = P col(A2) Y X † X. Thus, eq. holds for the point (A 1, A 2), and we obtain that DISPLAYFORM2 where the last equality uses the fact that S ΛS = Λ, as can be observed from the block pattern of S and the multiplicity pattern of Λ. Also, by the construction of M and the form of S V, a careful calculation shows that only the i, j-th diagonal elements of P col(S U M S V) have changed, i.e., DISPLAYFORM3 As the index i, j correspond to the singular values σ i, σ j, respectively, and σ i > σ j, one obtain that DISPLAYFORM4 Thus, the construction of the point (A 2, A 1) achieves a lower function value for any > 0. Letting → 0 and noticing that M is a perturbation of U S, the point (A 2, A 1) can be in an arbitrary neighborhood of (A 2, A 1). Lastly, note that rank(A 2) = rank(A 2). This completes the proof of item 1.Next, we prove item 2. Consider an optimal-order critical point (A 1, A 2). Then, A 2 must be non-full rank, since otherwise a full rank A 2 with optimal order corresponds to a global minimizer by Proposition 2. Since there exists some k ≤ r such that 0]. Using this expression, eq. yields that DISPLAYFORM5 DISPLAYFORM6 We now specify our perturbation scheme. Recalling the orthonormal matrix S defined in eq.. Then, we consider the following matrices for some 1, 2 > 0 DISPLAYFORM7 For this purpose, we need to utilize the condition of critical points in eq., which can be equivalently expressed as DISPLAYFORM8 (ii) ⇔ (CL 1) (rank(A2)+1):d1,: XY (I − U S :,1:(q−1) (U S :,1:(q−1) ) ) = 0where (i) follows by taking the transpose and then simplifying, and (ii) uses the fact that V = SS V = S:,1:(q−1) in the case of optimal-order critical point. Calculating the function value at (A 1, A 2), we obtain that DISPLAYFORM9. We next simplify the above three trace terms using eq.. For the first trace term, observe that DISPLAYFORM10 2 Tr(S :,q ΛS :,q) where (i) follows from eq. as S:,q is orthogonal to the columns of S:,1:(q−1). For the second trace term, we obtain that DISPLAYFORM11 = 2Tr(2 U S :,q (CL 1) (rank(A2)+1),: XY U V diag (U V diag) ) + 2Tr(1 2 U S :,q S :,q ΛSS V diag (U V diag) ) (i) = 2Tr(2 U S :,q (CL 1) (rank(A2)+1),: XY U V diag (U V diag) ) + 2Tr(1 2 σ k U S :,q e q S V diag (U V diag) )(ii) = 2Tr(2 U S :,q (CL 1) (rank(A2)+1),: XY U V diag (U V diag) ), where (i) follows from S:,q ΛS = σ k e q, and (ii) follows from e q S V diag = 0. For the third trace term, we obtain that 2Tr(P Y) = 2Tr(2 U S :,q (CL 1) (rank(A2)+1),: XY ) + 2Tr(1 2 U S :,q (U S :,q) Σ) = 2Tr(2 U S :,q (CL 1) (rank(A2)+1),: XY ) + 2Tr(1 2 S :,q ΛS :,q).Combining the expressions for the three trace terms above, we conclude that Consider a critical point (A 1, . . ., A) so that eq. FORMULA4 Observe that the product matrix A (,2) is equivalent to the class of matrices B 2 ∈ R min{d,...,d2}×d1.Consider a critical point (B 2, A 1) of the shallow linear network L:= The proof is similar to that for shallow linear networks. Consider a deep-non-optimal-order critical point (A 1, . . ., A), and define the orthonormal block matrix S k using the blocks of V k in a similar way as eq.. Then, A (l,k+2) takes the form A (l,k+2) = U k S k S k V k C k. Since A (l,k+2) is of non-optimal order, there exists i < j < r(k) such that p i (k) < m i (k) and p j (k) > 0. Thus, we perturb the j-th column of U k S k to be, and denote the ing matrix as M k.Then, we perturb A to be A = M k (U k S k) A so that A A (−1,k+2) = M k S k V k C k. Moreover, we generate A k+1 by eq. with U k ← M k, V k ← S k V k. Note that such construction satisfies eq., and hence also satisfies eq., which further yields that DISPLAYFORM0 With the above equation, the function value at this perturbed point is evaluated as DISPLAYFORM1 Then, a careful calculation shows that only the i, j-th diagonal elements of DISPLAYFORM2 have changed, and are Now consider a deep-optimal-order critical point (A 1, . . ., A). Note that with A (−2,1) fixed to be a constant, the deep linear network reduces to a shallow linear network with parameters (A, A −1). Since (A, A −1) is not a non-global minimum critical point of this shallow linear network and A is of optimal-order, we can apply the perturbation scheme in the proof of Proposition 3 to identify a perturbation (A, A −1) with rank(A) = rank(A) + 1 that achieves a lower function value. Consider any point in X D. Since A (,1) X = 0, we can scale the nonzero row, say, the i-th row (A) i,: A (−1,1) X properly in the same way as that in the proof of Proposition 3 to increase the function value. Lastly, item 1 and item 2 imply that every local minimum is a global minimum for these two types of critical points. Moreover, combining items 1,2 and 3, we conclude that every critical point of these two types in X D is a saddle point.
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SysEexbRb
We provide necessary and sufficient analytical forms for the critical points of the square loss functions for various neural networks, and exploit the analytical forms to characterize the landscape properties for the loss functions of these neural networks.
The backpropagation (BP) algorithm is often thought to be biologically implausible in the brain. One of the main reasons is that BP requires symmetric weight matrices in the feedforward and feedback pathways. To address this “weight transport problem” , two biologically-plausible algorithms, proposed by and , relax BP’s weight symmetry requirements and demonstrate comparable learning capabilities to that of BP on small datasets. However, a recent study by finds that although feedback alignment (FA) and some variants of target-propagation (TP) perform well on MNIST and CIFAR, they perform significantly worse than BP on ImageNet. Here, we additionally evaluate the sign-symmetry (SS) algorithm , which differs from both BP and FA in that the feedback and feedforward weights do not share magnitudes but share signs. We examined the performance of sign-symmetry and feedback alignment on ImageNet and MS COCO datasets using different network architectures (ResNet-18 and AlexNet for ImageNet; RetinaNet for MS COCO). Surprisingly, networks trained with sign-symmetry can attain classification performance approaching that of BP-trained networks. These complement the study by and establish a new benchmark for future biologically-plausible learning algorithms on more difficult datasets and more complex architectures. Deep learning models today are highly successful in task performance, learning useful representations, and even matching representations in the brain BID26 BID24. However, it remains a contentious issue whether these models reflect how the brain learns. Core to the problem is the fact that backpropagation, the learning algorithm underlying most of today's deep networks, is difficult to implement in the brain given what we know about the brain's hardware BID2 however, see Hinton 2007). One main reason why backpropagation seems implausible in the brain is that it requires sharing of feedforward and feedback weights. Since synapses are unidirectional in the brain, feedforward and feedback connections are physically distinct. Requiring them to shared their weights, even as weights are adjusted during learning, seems highly implausible. One approach to addressing this issue is to relax the requirement for weight-symmetry in error backpropagation. Surprisingly, when the feedback weights share only the sign but not the magnitude of the feedforward weights BID16 or even when the feedback weights are random (but fixed) BID17, they can still guide useful learning in the network, with performance comparable to and sometimes even better than performance of backpropagation, on datasets such as MNIST and CIFAR. Here, we refer to these two algorithms, respectively, as "sign-symmetry" and "feedback alignment." Since weight symmetry in backpropagation is required for accurately propagating the derivative of the loss function through layers, the success of asymmetric feedback algorithms indicates that learning can be supported even by inaccurate estimation of the error derivative. In feedback alignment, the authors propose that the feedforward weights learn to align with the random feedback weights, thereby allowing feedback to provide approximate yet useful learning signals BID17.However, a recent paper by BID0 finds that feedback alignment and a few other biologically-plausible algorithms, including variants of target propagation, do not generalize to larger and more difficult problems such as ImageNet BID4 ) and perform much worse than backpropagation. Nevertheless, the specific conditions Bartunov et al. tested are somewhat restrictive. They only tested locally-connected networks (i.e., weight sharing is not allowed among convolution filters at different spatial locations), a choice that is motivated by biological plausibility but in practice limits the size of the network (without weight sharing, each convolutional layer needs much more memory to store its weights), making it unclear whether poor performance was attributable solely to the algorithm, or to the algorithm on those architectures.1 Second, Bartunov et al. did not test sign-symmetry, which may be more powerful than feedback alignment since signsymmetric feedback weights may carry more information about the feedforward weights than the random feedback weights used in feedback alignment. In this work, we re-examine the performance of sign-symmetry and feedback alignment on ImageNet and MS COCO datasets using standard ConvNet architectures (i.e., ResNet-18, AlexNet, and RetinaNet). We find that sign-symmetry can in fact train networks on both tasks, achieving similar performance to backpropagation on ImageNet and reasonable performance on MS COCO. In addition, we test the use of backpropagation exclusively in the last layer while otherwise using feedback alignment, hypothesizing that in the brain, the classifier layer may not be a fully-connected layer and may deliver the error signal through some other unspecified mechanism. Such partial feedback alignment can achieve better performance (relative to backpropagation) than in BID0. Taken together, these extend previous findings and indicate that existing biologicallyplausible learning algorithms remain viable options both for training artificial neural networks and for modeling how learning can occur in the brain. Consider a layer in a feedforward neural network. Let x i denote the input to the i th neuron in the layer and y j the output of the j th neuron. Let W denote the feedforward weight matrix and W ij the connection between input x i and output y j. Let f denote the activation function. Then, Equation 1 describes the computation in the feedforward step. Now, let B denote the feedback weight matrix and B ij the feedback connection between output y j and input x i, and let f denote the derivative of the activation function f. Given the objective function E, the error gradient ∂E ∂xi calculated in the feedback step is described by Equation 2. DISPLAYFORM0 DISPLAYFORM1 Standard backpropagation requires B = W. Sign-symmetry BID16 relaxes the above symmetry requirement by letting B = sign(W), where sign(·) is the (elementwise) sign function. Feedback alignment BID17 uses a fixed random matrix as the feedback weight matrix B. Lillicrap et al. showed that through training, W is adjusted such that on average, e T W Be > 0, where e is the error in the network's output. This condition implies that the error correction signal Be lies within 90• of e T W, the error calculated by standard backpropagation. We implement both algorithms in PyTorch for convolutional and fully-connected layers and post the code at https://github.com/willwx/sign-symmetry. We trained ResNet-18 BID6 on ImageNet using 5 different training settings: 1) backpropagation; 2) sign-symmetry for convolutional layers and backpropagation for the last, fully-connected layer; 3) sign-symmetry for all (convolutional and fully-connected) layers; 4) feedback alignment for convolutional layers and backpropagation for the fully-connected layer; and 5) feedback alignment for all (convolutional and fully-connected) layers. In sign-symmetry, at each backward step, feedback weights were taken as the signs of the feedforward weights, scaled by the same scale λ used to initialize that layer. 2 In feedback alignment, feedback weights were initialized once at the beginning as random variables from the same distribution used to initialize that layer. For backpropagation, standard training parameters were used (SGD with learning rate 0.1, momentum 0.9, and weight decay 10 −4). For ResNet-18 with other learning algorithms, we used SGD with learning rate 0.05 3, while momentum and weight decay remain unchanged. For AlexNet with all learning algorithms, standard training parameters were used (SGD with learning rate 0.01, momentum 0.9, and weight decay 5 × 10 −4). We used a version of AlexNet BID13, as used in torchvision) which we slightly modified to add batch normalization BID9 before every nonlinearity and consequently removed dropout. For all experiments, we used a batch size of 256, a learning rate decay of 10-fold every 10 epochs, and trained for 50 epochs. BID12. Sign-symmetry performed nearly as well as backpropagation, while feedback alignment performed better than previously reported when backpropagation was used to train the last layer. In all cases, the network was able to learn (FIG0, TAB0). Remarkably, sign-symmetry only slightly underperformed backpropagation in this benchmark large dataset, despite the fact that signsymmetry does not accurately propagate either the magnitude or the sign of the error gradient. Hence, this is not predicted by the performance of signSGD BID1, where weight updates use the sign of the gradients, but gradients are still calculate accurately; or XNORNet BID22, where both feedforward and feedback weights are binary but symmetrical, so error backpropagation is still accurate. An intuitive explanation for this performance is that the skip-connections in ResNet help prevent the degradation of the gradient being passed through many layers of sign-symmetric feedback. However, sign-symmetry also performed similarly well to backpropagation in a (modified) AlexNet architecture, which did not contain skip connections. Therefore, skip-connections alone do not explain the performance of sign-symmetry. In addition, although its performance was considerably worse, feedback alignment was still able to guide better learning in the network than reported by BID0 Figure 3 ) if we use backpropagation in the last layer. This condition is not unreasonable since, in the brain, the classifier layer is likely not a soft-max classifier and may deliver error signals by a different mechanism. We also tested using backpropagation exclusively for the last layer in a network otherwise trained with sign-symmetry, but the effect on the performance was minimal. One possibility why sign-symmetry performed better than feedback alignment is that in sign-symmetry, the feedback weight always tracks the sign of the feedforward weight, which may reduce the burden on the feedforward weight to learn to align with the feedback weight. Finally, in BID16, Batch-Manhattan (BM) SGD was proposed as a way to stabilize training with asymmetric feedback algorithms. In our experience, standard SGD consistently worked better than BM for sign-symmetry, but BM may improve for feedback alignment. We have not comprehensively characterized the effects of BM since many factors like learning rate can affect the outcome. Future experiments are needed to draw stronger . Besides the ImageNet classification task, we examined the performance of sign-symmetry on the MS COCO object detection task. Object detection is more complex than classification and might therefore require more complicated network architecture in order to achieve high accuracy. Thus, in this experiment we assessed the effectiveness of sign-symmetry in training networks that were more complicated and difficult to optimize. We trained the state-of-the-art object detection network RetinaNet proposed by BID18 on the COCO trainval35k split, which consists of 80k images from train and 35k random images from the 40k-image val set. RetinaNet comprises a ResNet-FPN backbone, a classification subnet, and a bounding box regressing subnet. The network was trained with three different training settings: 1) backpropagation for all layers; 2) backpropagation for the last layer in both subnets and sign-symmetry for rest of the layers; 3) backpropagation for the last layer in both subnets and feedback alignment for rest of the layers. We used a backbone ResNet-18 pretrained on ImageNet to initialize the network. In all the experiments, the network was trained with SGD with an initial learning rate of 0.01, momentum of 0.9, and weight decay of 0.0001. We trained the network for 40k iterations with 8 images in each minibatch. The learning rate was divided by 10 at iteration 20k. The on COCO are similar to those on ImageNet, although the performance gap between SS and BP on COCO is slightly more prominent FIG1. A number of factors could have potentially contributed to this . We followed the Feature Pyramid Network (FPN) architecture design choices, optimizers, and hyperparameters reported by BID18; these choices are all optimized for use with backpropagation instead of sign-symmetry. Hence, the here represent a lowerbound on the performance of sign-symmetry for training networks on the COCO dataset. We ran a number of analyses to understand how sign-symmetry guides learning. BID17 show that with feedback alignment, the alignment angles between feedforward and feedback weights gradually decrease because the feedforward weights learn to align with the feedback weights. We asked whether the same happens in sign-symmetry by computing alignment angles as in BID17: For every pair of feedforward and feedback weight matrices, we flattened the matrices into vectors and computed the angle between the vectors. Interestingly, we found that during training, the alignment angles decreased for the last 3 layers but increased for the other layers (Figure 3a). In comparison, in the backpropagation-trained network (where sign(W) was not used in any way), the analogous alignment angle between W and sign(W) increased for all layers. One possible explanation for the increasing trend is that as the training progresses, the feedforward weights tend to become sparse. Geometrically, this means that feedforward vectors become more aligned to the standard basis vectors and less aligned with the feedback weight vectors, which always lie on a diagonal by construction. This explanation is consistent with the similarly increasing trend of the average kurtosis of the feedforward weights (Figure 3b), which indicates that values of the weights became more dispersed during training. Since the magnitudes of the feedforward weights were discarded when calculating the error gradients, we also looked at how sign-symmetry affected the size of the trained weights. Sign-symmetry and backpropagation ed in weights with similar magnitudes (Figure 3c). More work is needed to elucidate how sign-symmetry guides efficient learning in the network. Our indicate that biologically-plausible learning algorithms, specifically sign-symmetry and feedback alignment, are able to learn on ImageNet. This finding seemingly conflicts with the findings by BID0. Why do we come to such different ?First, Bartunov et al. did not test sign-symmetry, which is expected to be more powerful than feedback alignment, because it is a special case of feedback alignment that allows feedback weights to have additional information about feedforward weights. Indeed, on ImageNet, the performance of sign-symmetry approached that of backpropagation and exceeded the performance of feedback alignment by a wide margin. Another reason may be that instead of using standard ConvNets on ImageNet, Bartunov et al. only tested locally-connected networks. While the later is a more biologically plausible architecture, in practice, it is limited in size by the need to store separate weights Figure 3: a, During training with sign-symmetry, alignment angles between feedforward weights W and feedback weights sign(W) decreased in the last 3 layers but increased in early layers, whereas during training with backpropagation, the analogous alignment angles increased for all layers and were overall larger. b, Kurtosis of the feedforward weight matrices increased during training. c, The magnitudes of weights trained by sign-symmetry were similar to those trained by backpropagation. Line and shading, mean ± std for epoch 50.for each spatial location. This reduced model capacity creates a bottleneck that may affect the performance of feedback alignment (see , Supplementary Note 9). Finally, the performance of feedback alignment also benefited from the use of backpropagation in the last layer in our conditions. A major reason why backpropagation is considered implausible in the brain is that it requires exact symmetry of physically distinct feedforward and feedback pathways. Sign-symmetry and feedback alignment address this problem by relaxing this tight coupling of weights between separate pathways. Feedback alignment requires no relation at all between feedforward and feedback weights and simply depends on learning to align the two. Hence, it can be easily realized in the brain (for example, see , Supplementary Figure 3). However, empirically, we and others have found its performance to be not ideal on relatively challenging problems. Sign-symmetry, on the other hand, introduces a mild constraint that feedforward and feedback connections be "antiparallel": They need to have opposite directions but consistent signs. This can be achieved in the brain with two additional yet plausible conditions: First, the feedforward and feedback pathways must be specifically wired in this antiparallel way. This can be achieved by using chemical signals to guide specific targeting of axons, similar to how known mechanisms for specific wiring operate in the brain BID20 BID8. One example scheme of how this can be achieved is shown in Figure 4. While the picture in Figure 4a is complex, most of the complexity comes from the fact that units in a ConvNet produce inconsistent outputs (i.e., both positive and negative). If the units are consistent (i.e., producing exclusively positive or negative outputs), the picture simplifies to Figure 4b. Neurons in the brain are observed to be consistent, as stated by the so-called "Dale's Law" BID3 BID25. Hence, this constraint would have to be incorporated at some point in any biologically plausible network, and remains an important direction for future work. We want to remark that Figure 4 is meant to indicate the relative ease of wiring sign-symmetry in the brain (compared to, e.g., wiring a network capable of weight transport), not that the brain is known to be wired this way. Nevertheless, it represents a hypothesis that is falsifiable by experimental data, potentially in the near future. Related, a second desideratum is that weights should not change sign during training. While our current setting for sign-symmetry removes weight magnitude transport, it still implicitly relies on "sign transport." However, in the brain, the sign of a connection weight depends on the type of the 4 A paper from last year examined connectivity patterns within tissue sizes of approx. 500 microns and axon lengths of approx. 250 microns BID23; recent progress (fueled by deep learning) can trace axons longer than 1 mm, although the imaging of large brain volumes is still limiting. In comparison, in mice, adjacent visual areas (corresponding to stages of visual processing) are 0.5-several mms apart BID19, while in primates it is tens of millimeters. Thus, testing the reality of sign-symmetric wiring is not quite possible today but potentially soon to be. presynaptic neuron-e.g., glutamatergic (excitatory) or GABAergic (inhibitory)-a quality that is intrinsic to and stable for each neuron given existing evidence. Hence, if sign-symmetry is satisfied initially-for example, through specific wiring as just described-it will be satisfied throughout learning, and "sign transport" will not be required. Thus, evaluating the capacity of sign-fixed networks to learn is another direction for future work. Figure 4: The specific wiring required for sign-symmetric feedback can be achieved using axonal guidance by specific receptor-ligand recognition. Assume that an axon carrying ligand L X will only synapse onto a downstream neuron carrying the corresponding receptor R X. By expressing receptors and ligands in an appropriate pattern, an antiparallel wiring pattern can be established that supports sign-symmetric feedback. a, An example scheme. In this scheme, one inconsistent unit (i.e., a unit that produce both positive and negative outputs) in the network is implemented by three consistent biological neurons, so that each synapse is exclusively positive or negative. n input neurons orthogonal ligand-receptor pairs is sufficient to implement all possible connection patterns. b, An example scheme for implementing a signsymmetric network with consistent units. Only 2 orthogonal ligand-receptor pairs are needed to implement all possible connectivities in this case. These schemes represent falsifiable hypotheses, although they do not exclude other possible implementations. Another element of unclear biological reality, common to feedback alignment and sign-symmetry, is that the update of a synaptic connection (i.e., weight) between two feedforward neurons (A to B) depends on the activity in a third, feedback neuron C, whose activation represents the error of neuron B. One way it can be implemented biologically is for neuron C to connect to B with a constant and fixed weight. When C changes its value due to error feedback, it will directly induce a change of B's electric potential and thus of the postsynaptic potential of the synapse between A and B, which might lead to either Long-term Potentiation (LTP) or Long-term Depression (LTD) of synapse A-B.Biological plausibility of ResNet has been previously discussed by BID14, claiming that ResNet corresponds to an unrolled recurrent network in the visual cortex. However, it is unclear yet how backpropagation through time can be implemented in the brain. Biological plausibility of batch normalization has been discussed in BID15, where they addressed the issues with online learning (i.e., one sample at a time, instead of minibatch), recurrent architecture and consistent training and testing normalization statistics. Other biological constraints include removing weight-sharing in convolutional layers as in BID0, incorporating temporal dynamics as in BID17, using realistic spiking neurons, addressing the sample inefficiency general to deep learning, etc. We believe that these are important yet independent issues to the problem of weight transport and that by removing the latter, we have taken a meaningful step toward biological plausibility. Nevertheless, many steps remain in the quest for a truly plausible, effective, and empirically-verified model of learning in the brain. Recent work shows that biologically-plausible learning algorithms do not scale to challenging problems such as ImageNet. We evaluated sign-symmetry and re-evaluated feedback alignment on their effectiveness training ResNet and AlexNet on ImageNet and RetinaNet on MS COCO. We find that 1) sign-symmetry performed nearly as well as backpropagation on ImageNet, 2) slightly modified feedback alignment performed better than previously reported, and 3) both algorithms had reasonable performance on MS COCO with minimal hyperparameter tuning. Taken together, these indicate that biologically-plausible learning algorithms, in particular sign-symmetry, remain promising options for training artificial neural networks and modeling learning in the brain.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SygvZ209F7
Biologically plausible learning algorithms, particularly sign-symmetry, work well on ImageNet
We introduce the 2-simplicial Transformer, an extension of the Transformer which includes a form of higher-dimensional attention generalising the dot-product attention, and uses this attention to update entity representations with tensor products of value vectors. We show that this architecture is a useful inductive bias for logical reasoning in the context of deep reinforcement learning. Deep learning contains many differentiable algorithms for computing with learned representations. These representations form vector spaces, sometimes equipped with additional structure. A recent example is the Transformer in which there is a vector space V of value vectors and an inner product space H of query and key vectors. This structure supports a kind of messagepassing, where a value vector v j ∈ V derived from entity j is propagated to update an entity i with weight q i · k j, where q i ∈ H is a query vector derived from entity i, k j ∈ H is a key vector derived from entity j, and the inner product on H is written as a dot product. The Transformer therefore represents a relational inductive bias, where a relation from entity j to entity i is perceived to the extent that q i · k j is large and positive. However, the real world has structure beyond entities and their direct relationships: for example, the three blocks in Figure 1 are arranged in such a way that if either of the supporting blocks is removed, the top block will fall. This is a simple 3-way relationship between entities i, j, k that is complex to represent as a system of 2-way relationships. It is natural to make the hypothesis that such higher-order relationships are essential to extracting the full predictive power of data, across many domains. In accordance with this hypothesis, we introduce a generalisation of the Transformer architecture, the 2-simplicial Transformer, which incorporates both 2-and 3-way interactions. Mathematically, the key observation is that higher-order interactions between entities can be understood using algebras. This is nothing but Boole's insight (Boole, 1847) which set in motion the development of modern logic. In our situation, an appropriate algebra is the Clifford algebra Cl(H) of the space H of queries and keys, which contains that space H ⊆ Cl(H) and in which queries and keys can be multiplied. To represent a 3-way interaction we map each entity i to a triple (p i, l k) using a natural continuous function η: Cl(H) −→ R associated to the Z-grading of Cl(H). This scalar measures how strongly the network perceives a 3-way interaction involving i, j, k. In summary, the 2-simplicial Transformer learns how to represent entities in its environment as vectors v ∈ V, and how to transform those entities to queries and (pairs of) keys in H, so that the signals provided by the scalars q i · k j and η(p i l 1 j l 2 k) are informative about higher-order structure in the environment. As a toy example of higher-order structure, we consider the reinforcement learning problem in a variant of the BoxWorld environment from . The original BoxWorld is played on a rectangular grid populated by keys and locked boxes of varying colours, with the goal being to open the box containing the "Gem". In our variant of the BoxWorld environment, bridge BoxWorld, the agent must use two keys simultaneously to obtain the Gem; this structure in the environment creates many 3-way relationships between entities, including for example the relationship between the locked boxes j, k providing the two keys and the Gem entity i. This structure in the environment is fundamentally logical in nature, and encodes a particular kind of conjunction; see Appendix I. The architecture of our deep reinforcement learning agent largely follows and the details are given in Section 4. The key difference between our simplicial agent and the relational agent of is that in place of a standard Transformer block we use a 2-simplicial Transformer block. Our experiments show that the simplicial agent confers an advantage over the relational agent as an inductive bias in our reasoning task. Motivation from neuroscience for a simplicial inductive bias for abstract reasoning is contained in Appendix J. Our use of tensor products of value vectors is inspired by the semantics of linear logic in vector spaces (; ; ;) in which an algorithm with multiple inputs computes on the tensor product of those inputs, but this is an old idea in natural language processing, used in models including the second-order RNN (; ; ;), multiplicative RNN , Neural Tensor Network and the factored 3-way Restricted Boltzmann Machine , see Appendix A. Tensors have been used to model predicates in a number of neural network architectures aimed at logical reasoning . The main novelty in our model lies in the introduction of the 2-simplicial attention, which allows these ideas to be incorporated into the Transformer architecture. In this section we first review the definition of the ordinary Transformer block and then explain the 2-simplicial Transformer block. We distinguish between the Transformer architecture which contains a word embedding layer, an encoder and a decoder , and the Transformer block which is the sub-model of the encoder that is repeated. The fundamental idea, of propagating information between nodes using weights that depend on the dot product of vectors associated to those nodes, comes ultimately from statistical mechanics via the Hopfield network (Appendix B). The ordinary and 2-simplicial Transformer blocks define operators on sequences e 1,..., e N of entity representations. Strictly speaking the entities are indices 1 ≤ i ≤ N but we sometimes identify the entity i with its representation e i. The space of entity representations is denoted V, while the space of query, key and value vectors is denoted H. We use only the vector space structure on V, but H = R d is an inner product space with the usual dot product pairing (h, h) → h · h and in defining the 2-simplicial Transformer block we will use additional algebraic structure on H, including the "multiplication" tensor B: H ⊗ H −→ H of (used to propagate tensor products of value vectors) and the Clifford algebra of H (used to define the 2-simplicial attention). In the first step of the standard Transformer block we generate from each entity e i a tuple of vectors via a learned linear transformation E: V −→ H ⊕3. These vectors are referred to respectively as query, key and value vectors and we write Stated differently, In the second step we compute a refined value vector for each entity Finally, the new entity representation e i is computed by the application of a feedforward network g θ, layer normalisation and a skip connection Remark 2.1. In the introduction we referred to the idea that a Transformer model learns representations of relations. To be more precise, these representations are heads, each of which determines an independent set of transformations W Q, W K, W V which extract queries, keys and values from entities. Thus a head determines not only which entities are related (via W Q, W K) but also what information to transmit between them (via W V). In multiple-head attention with K heads, there are K channels along which to propagate information between every pair of entities, each of dimension dim(H)/K. More precisely, we choose a decomposition H = H 1 ⊕ · · · ⊕ H K so that and write To compute the output of the attention, we take a direct sum of the value vectors propagated along every one of these K channels, as in the formula In combinatorial topology the canonical one-dimensional object is the 1-simplex (or edge) j −→ i. Since the standard Transformer model learns representations of relations, we refer to this form of attention as 1-simplicial attention. The canonical two-dimensional object is the 2-simplex (or triangle) which we may represent diagrammatically in terms of indices i, j, k as In the 2-simplicial Transformer block, in addition to the 1-simplicial contribution, each entity e i is updated as a function of pairs of entities e j, e k using the tensor product of value vectors u j ⊗ u k and a probability distribution derived from a scalar triple product p i, l 1 j, l 2 k in place of the scalar product q i · k j. This means that we associate to each entity e i a four-tuple of vectors via a learned linear transformation E: V −→ H ⊕4, denoted We still refer to p i as the query, l 1 i, l 2 i as the keys and u i as the value. Stated differently, whose square is a polynomial in the pairwise dot products This scalar triple product has a simple geometric interpretation in terms of the volume of the tetrahedron with vertices 0, a, b, c. To explain, recall that the triangle spanned by two unit vectors a, b in R 2 has an area A which can be written in terms of the dot product of a and b. In three dimensions, the analogous formula involves the volume V of the tetrahedron with vertices given by unit vectors a, b, c, and the scalar triple product as shown in Figure 2. In general, given nonzero vectors a, b, c letâ,b,ĉ denote unit vectors in the same directions. Then we can by Lemma C.10(v) factor out the length in the scalar triple product Figure 2: The geometry of 1-and 2-simplicial attention. Left: the dot product in terms of the area A in R 2. Right: the triple product in terms of the volume V in R 3. so that a general scalar triple product can be understood in terms of the vector norms and configurations of three points on the 2-sphere. One standard approach to calculating volumes of such tetrahedrons is the cross product which is only defined in three dimensions. Since the space of representations H is high dimensional the natural framework for the triple scalar product a, b, c is instead the Clifford algebra of H (see Appendix C). For present purposes, we need to know that a, b, c attains its minimum value (which is zero) when a, b, c are pairwise orthogonal, and attains its maximum value (which is a b c) if and only if {a, b, c} is linearly dependent (Lemma C.10). Using the number p i, l k as a measure of the degree to which entity i is attending to (j, k), or put differently, the degree to which the network predicts the existence of a 2-simplex (i, j, k), the update rule for the entities when using purely 2-simplicial attention is where B: H ⊗ H −→ H is a learned linear transformation. Although we do not impose any further constraints, the motivation here is to equip H with the structure of an algebra; in this respect we model conjunction by multiplication, an idea going back to Boole (Boole, 1847). We compute multiple-head 2-simplicial attention in the same way as in the 1-simplicial case. To combine 1-simplicial heads (that is, ordinary Transformer heads) and 2-simplicial heads we use separate inner product spaces H 1, H 2 for each simplicial dimension, so that there are learned linear transformations ⊕4 and the queries, keys and values are extracted from an entity e i according to The update rule (for a single head in each simplicial dimension) is then: If there are K 1 heads of 1-simplicial attention and K 2 heads of 2-simplicial attention, then is modified in the obvious way using Without the additional layer normalisation on the output of the 2-simplicial attention we find that training is unstable. The natural explanation is that these outputs are constructed from polynomials of higher degree than the 1-simplicial attention, and thus computational paths that go through the 2-simplicial attention will be more vulnerable to exploding or vanishing gradients. The time complexity of 1-simplicial attention as a function of the number of entities is O(N 2) while the time complexity of 2-simplicial attention is O(N 3) since we have to calculate the attention for every triple (i, j, k) of entities. For this reason we consider only triples (i, j, k) where the base of the 2-simplex (j, k) is taken from a set of pairs predicted by the ordinary attention, which we view as the primary locus of computation. More precisely, we introduce in addition to the N entities (now referred to as standard entities) a set of M virtual entities e N +1,..., e N +M. These virtual entities serve as a "scratch pad" onto which the iterated ordinary attention can write representations, and we restrict j, k to lie in the range N < j, k ≤ N + M so that only value vectors obtained from virtual entities are propagated by the 2-simplicial attention. With virtual entities the update rule is for and for The updated representation e i is computed from v i, e i using as before. Observe that the virtual entities are not used to update the standard entities during 1-simplicial attention and the 2-simplicial attention is not used to update the virtual entities; instead the second summand in involves the vector u i = W U e i, which adds recurrence to the update of the virtual entities. After the attention phase the virtual entities are discarded. The method for updating the virtual entities is similar to the role of the memory nodes in the relational recurrent architecture of , the master node in (, §5.2) and memory slots in the Neural Turing Machine . The update rule has complexity O(N M 2) and so if we take M to be of order √ N we get the desired complexity O(N 2). The environment in our reinforcement learning problem is a variant of the BoxWorld environment from . The standard BoxWorld environment is a rectangular grid in which are situated the player (a dark gray tile) and a number of locked boxes represented by a pair of horizontally adjacent tiles with a tile of colour x, the key colour, on the left and a tile of colour y, the lock colour, on the right. There is also one loose key in each episode, which is a coloured tile not initially adjacent to any other coloured tile. All other tiles are blank (light gray) and are traversable by the player. The rightmost column of the screen is the inventory, which fills from the top and contains keys that have been collected by the player. The player can pick up any loose key by walking over it. In order to open a locked box, with key and lock colours x, y, the player must step on the lock while in possession of a copy of y, in which case one copy of this key is removed from the inventory and replaced by a key of colour x. The goal is to attain a white key, referred to as the Gem (represented by a white square) as shown in the sample episode of Figure 3. In this episode, there is a loose pink key (marked 1) which can be used to open one of two locked boxes, obtaining in this way either key 5 or key 2 1. The correct choice is 2, since this leads via the sequence of keys 3, 4 to the Gem. Some locked boxes, if opened, provide keys that are not useful for attaining the Gem. Since each key may only be used once, opening such boxes means the episode is rendered unsolvable. Such boxes are called distractors. An episode ends when the player either obtains the Gem (with a reward of +10) or opens a distractor box (reward −1). Opening any non-distractor box, or picking up a loose key, garners a reward of +1. The solution length is the number of locked boxes (including the one with the Gem) in the episode on the path from the loose key to the Gem. Our variant of the BoxWorld environment, bridge BoxWorld, is shown in Figure 4. In each episode two keys are now required to obtain the Gem, and there are therefore two loose keys on the board. To obtain the Gem, the player must step on either of the lock tiles with both keys in the inventory, at which point the episode ends with the usual +10 reward. Graphically, Gems with multiple locks are denoted with two vertical white tiles on the left, and the two lock tiles on the right. Two solution 1 The agent sees only the colours of tiles, not the numbers which are added here for exposition. paths (of the same length) leading to each of the locks on the Gem are generated with no overlapping colours, beginning with two loose keys. In episodes with multiple locks we do not consider distractor boxes of the old kind; instead there is a new type of distractor that we call a bridge. This is a locked box whose lock colour is taken from one solution branch and whose key colour is taken from the other branch. Opening the bridge renders the puzzle unsolvable. An episode ends when the player either obtains the Gem (reward +10) or opens the bridge (reward −1). Opening a box other than the bridge, or picking up a loose key, has a reward of +1 as before. In this paper we consider episodes with zero or one bridge (the player cannot fail to solve an episode with no bridge). Standard BoxWorld is straightforward for an agent to solve using relational reasoning, because leaves on the solution graph can be identified (their key colour appears only once on the board) and by propagating this information backwards along the arrows on the solution graph, an agent can identify distractors. Bridge BoxWorld emphasises reasoning about 3-way relationships (or 2-simplices). The following 2-simplex motifs appear in all solution graphs where a pair of boxes (α, β) is a source if they have the same lock colour but distinct key colours, and a sink if they have the same key colour but distinct lock colours (the 2-simplex leading to the Gem being an example). If α, β is a source or a sink then either α is the bridge or β is the bridge. If the agent can observe both a source and a sink then it can locate the bridge. It is less clear how to identify bridges using iterated relational reasoning, because every path in the solution graph eventually reaches the Gem. Our baseline relational agent is modeled closely on except that we found that a different arrangement of layer normalisations worked better in our experiments, see Remark 4.1. The code for our implementation of both agents is available online . In the following we describe the network architecture of both the relational and simplicial agent; we will note the differences between the two models as they arise. The input to the agent's network is an RGB image, represented as a tensor of shape where R is the number of rows and C the number of columns (the C + 1 is due to the inventory). This tensor is divided by 255 and then passed through a 2 × 2 convolutional layer with 12 features, and then a 2 × 2 convolutional layer with 24 features. Both activation functions are ReLU and the padding on our convolutional layers is "valid" so that the output has shape [R − 2, C − 1, 24]. We then multiply by a weight matrix of shape 24 × 62 to obtain a tensor of shape [R − 2, C − 1, 62]. Each feature vector has concatenated to it a twodimensional positional encoding, and then the is reshaped into a tensor of shape [N, 64] where N = (R − 2)(C − 1) is the number of Transformer entities. This is the list (e 1, . . ., e N) of entity representations e i ∈ V = R 64. In the case of the simplicial agent, a further two learned embedding vectors e N +1, e N +2 are added to this list; these are the virtual entities. So with M = 0 in the case of the relational agent and M = 2 for the simplicial agent, the entity representations form a tensor of shape [N + M, 64]. This tensor is then passed through two iterations of the Transformer block (either purely 1-simplicial in the case of the relational agent, or including both 1 and 2-simplicial attention in the case of the simplicial agent). In the case of the simplicial agent the virtual entities are then discarded, so that in both cases we have a sequence of entities e 1,..., e N. Inside each block are two feedforward layers separated by a ReLU activation with 64 hidden nodes; the weights are shared between iterations of the Transformer block. In the 2-simplicial Transformer block the input tensor, after layer normalisation, is passed through the 2-simplicial attention and the (after an additional layer normalisation) is concatenated to the output of the 1-simplicial attention heads before being passed through the feedforward layers. The pseudo-code for the ordinary and 2-simplicial Transformer blocks are: d e f t r a n s f o r m e r b l o c k (e): x = LayerNorm (e) a = 1 S i m p l i c i a l A t t e n t i o n (x) b = D e n s e L a y e r 1 (a) c = D e n s e L a y e r 2 (b) r = Add ([ e, c] ) e p r i m e = LayerNorm (r) r e t u r n e p r i m e d e f s i m p l i c i a l t r a n s f o r m e r b l o c k (e): x = LayerNorm (e) a1 = 1 S i m p l i c i a l A t t e n t i o n (x) a2 = 2 S i m p l i c i a l A t t e n t i o n (x) a2n = LayerNorm (a2) a c = C o n c a t e n a t e ([ a1, a2n] ) b = D e n s e L a y e r 1 (a c) c = D e n s e L a y e r 2 (b) r = Add ([ e, c] ) e p r i m e = LayerNorm (r) r e t u r n e p r i m e Our implementation of the standard Transformer block is based on an implementation in Keras from . In both the relational and simplicial agent, the space V of entity representations has dimension 64 and we denote by H 1, H 2 the spaces of 1-simplicial and 2-simplicial queries, keys and values. In both the relational and simplicial agent there are two heads of 1-simplicial attention, In the simplicial agent there is a single head of 2-simplicial attention with dim(H 2) = 48 and two virtual entities. The output of the Transformer blocks is a tensor of shape [N, 64]. To this final entity tensor we apply max-pooling over the entity dimension, that is, we compute a vector v ∈ R 64 by the rule v i = max 1≤j≤N (e j) i for 1 ≤ i ≤ 64. This vector v is then passed through four fully-connected layers with 256 hidden nodes and ReLU activations. The output of the final fully-connected layer is multiplied by one 256 × 4 weight matrix to produce logits for the actions (left, up, right and down) and another 256 × 1 weight matrix to produce the value function. Remark 4.1. There is wide variation in the layer normalisation in Transformer models, compare (; ;). In layer normalisation occurs in two places: on the concatenation of the Q, K, V matrices, and on the output of the feedforward network g θ. We keep this second normalisation but move the first from after the linear transformation E of to before this linear transformation, so that it is applied directly to the incoming entity representations. This ordering gave the best performant relational model in our experiments, with our diverging even further if a direct comparison to the architecture was used. The training of our agents uses the implementation in Ray RLlib of the distributed off-policy actor-critic architecture IMPALA of with optimisation algorithm RMSProp. The hyperparameters for IMPALA and RMSProp are given in Table 1 of Appendix E. Following and other recent work in deep reinforcement learning, we use RMSProp with a large value of the hyperparameter ε = 0.1. As we explain in Appendix G, this is effectively RMSProp with smoothed gradient clipping. First we verified that our implementation of the relational agent solves the BoxWorld environment with a solution length sampled from and number of distractors sampled from on a 9 × 9 grid. After training for 2.35 × 10 9 timesteps our implementation solved over 93% of puzzles (regarding the discrepancy with the reported sample complexity in see Appendix D). Next we trained the relational and simplicial agent on bridge BoxWorld, under the following conditions: half of the episodes contain a bridge, the solution length is uniformly sampled from (both solution paths are of the same length), colours are uniformly sampled from a set of 20 colours and the boxes and loose keys are arranged randomly on a 7 × 9 grid, under the constraint that the box containing the Gem does not occur in the rightmost column or bottom row, and keys appear only in positions (y, x) = (2r, 3c − 1) for 1 ≤ r ≤ 3, 1 ≤ c ≤ 3. The starting and ending point of the bridge are uniformly sampled with no restrictions (e.g. the bridge can involve the colours of the loose keys and locks on the Gem) but the lock colour is always on the top solution path. There is no curriculum and no cap on timesteps per episode. We trained four independent trials of both agents to either 5.5 × 10 9 timesteps or convergence, whichever came first. In Figure 6 we give the mean and standard deviation of these four trials, showing a clear advantage of the simplicial agent. We make some remarks about performance comparisons taking into account the fact that the relational agent is simpler (and hence faster to execute) than the simplicial agent in Appendix D. The training runs for the relational and simplicial agents are shown in Figure 9 and Figure 10 of Appendix F, together with analysis and visualization of the 1-and 2-simplicial attention in specific examples. In the reported experiments we use only two Transformer blocks; we performed two trials of a relational agent using four Transformer blocks, but after 5.5 × 10 9 timesteps neither trial exceeded the 0.85 plateau in terms of fraction solved. Our overall therefore suggest that the 2-simplicial Transformer is more powerful than the standard Transformer, with its performance not matched by adding greater depth. This is further supported by the fact on a time-adjusted basis, the 2-simplicial model still converges faster than the ordinary model; see Figure 8 of Appendix D. We analyse the simplicial agent to establish that it has learned to use the 2-simplicial attention, and to provide some intuition for why 2-simplices are useful; additional details are in Appendix F. The analysis is complicated by the fact that our 2 × 2 convolutional layers (of which there are two) are not padded, so the number of entities processed by the Transformer blocks is (R − 2)(C − 1) where the original game board is R × C and there is an extra column for the inventory (here R is the number of rows). This means there is not a one-to-one correspondence between game board tiles and entities; for example, all the experiments reported in Figure 6 are on a 7 × 9 board, so that there are N = 40 Transformer entities which can be arranged on a 5 × 8 grid (information about this grid is passed to the Transformer blocks via the positional encoding). Nonetheless we found that for trained agents there is a strong relation between a tile in position (y, x) and the Transformer entity with index This correspondence is presumed in the following analysis, and in our visualisations. Displayed in Figure 7 are attention distributions for simplicial agent A of Figure 10. The four images in the top right show the ordinary attention of the virtual entities in the first iteration of the simplicial Transformer block: in the first head, the first virtual entity attends strongly to a particular lock, while the second head of the second virtual entity attends strongly to the corresponding key. Shown at the bottom of Figure 7 is the 2-simplicial attention in the second iteration of the simplicial Transformer block. The columns are query entities i and rows are key entity pairs (j, k) in lexicographic order,,,. Entity 17 is the top lock on the Gem, 25 is the bottom lock on the Gem, 39 is the player. We may therefore infer, from our earlier description of the ordinary attention of the virtual entities, that the agent "perceives" the 2-simplex with query entity 25 as shown. In general we observe that the top and bottom locks on the Gem, the player, and the entities 7, 15 associated to the inventory often have a non-generic 2-simplicial attention, which strongly suggests that the simplicial agent has learned to use 2-simplices in a meaningful way. Figure 7: Visualization of 2-simplicial attention in step 18 of an episode. On general grounds one might expect that in the limit of infinite experience, any reinforcement learning agent with a sufficiently deep neural network will be able to solve any environment, in-cluding those like bridge BoxWorld that involve higher-order relations between entities. In practice, however, we do not care about the infinite computation limit. In the regime of bounded computation it is reasonable to introduce biases towards learning representations of structures that are found in a wide range of environments that we consider important. We argue that higher-order relations between entities are an important example of such structures, and that the 2-simplicial Transformer is a natural inductive bias for 3-way interactions between entities. We have given preliminary evidence for the utility of this bias by showing that in the bridge BoxWorld environment the simplicial agent has better performance than a purely relational agent, and that this performance involves in a meaningful way the prediction of 3-way interactions (or 2-simplices). We believe that simplicial Transformers may be useful for any problem in which higher-order relations between entities are important. The long history of interactions between logic and algebra is a natural source of inspiration for the design of inductive biases in deep learning. In this paper we have exhibited one example: Boole's idea, that relationships between entities can be modeled by multiplication in an algebra, may be realised in the context of deep learning as an augmentation to the Transformer architecture using Clifford algebras of spaces of representations. The Transformer model and descendents such as the Universal Transformer can be viewed as general units for computing with learned representations; in this sense they have a similar conceptual role to the Neural Turing Machine (NTM) and Differentiable Neural Computer . As pointed out in (, §4) one can view the Transformer as a block of parallel RNNs (one for each entity) which update their hidden states at each time step by attending to the sequence of hidden states of the other RNNs at the previous step. We expand on those remarks here in order to explain the connection between the 2-simplicial Transformer and earlier work in the NLP literature, which is written in terms of RNNs. We consider a NTM with content-based addressing only and no sharpening. The core of the NTM is an RNN controller with update rule where W, U, b are weight matrices, x is the current input symbol, h is the previous hidden state, h is the next hidden state and M is the output of the memory read head where there are N memory slots containing M 1,... M N, q is a query generated from the hidden state of the RNN by a weight matrix q = Zh, and We omit the mechanism for writing to the memory here, since it is less obvious how that relates to the Transformer; see (, §3.2). Note that while we can view M j as the "hidden state" of memory slot j, the controller's hidden state and the hidden states of the memory slots play asymmetric roles, since the former is updated with a feedforward network at each time step, while the latter is not. The Transformer with shared transition functions between layers is analogous to a NTM with this asymmetry removed: there is no longer a separate recurrent controller, and every memory slot is updated with a feedforward network in each timestep. To explain, view the entity representations e 1,..., e N of the Transformer as the hidden states of N parallel RNNs. The new representation is where the attention term is and q i = Ze i is a query vector obtained by a weight matrix from the hidden state, the k j = Ke j are key vectors and v j = V e j is the value vector. Note that in the Transformer the double role of M j in the NTM has been replaced by two separate vectors, the key and value, and the cosine similarity K[−, −] has been replaced by the dot product. Having now made the connection between the Transformer and RNNs, we note that the second-order RNN (; ; ;) and the similar multiplicative RNN have in common that the update rule for the hidden state of the RNN involves a term V (x ⊗ h) which is a linear function of the tensor product of the current input symbol x and the current hidden state h. One way to think of this is that the weight matrix V maps inputs x to linear operators on the hidden state. In the update rule contains a term V (e 1 ⊗ e 2) where e 1, e 2 are entity vectors, and this is directly analogous to our construction. The continuous Hopfield network (, Ch.42) with N nodes updates in each timestep a sequence of vectors by the rules for some parameter η. The Transformer block may therefore be viewed as a refinement of the Hopfield network, in which the three occurrences of entity vectors in are replaced by query, key and value vectors W Q e i, W K e j, W V e j respectively, the nonlinearity is replaced by a feedforward network with multiple layers, and the dynamics are stabilised by layer normalisation. The initial representations e i also incorporate information about the underlying lattice, via the positional embeddings. The idea that the structure of a sentence acts to transform the meaning of its parts is due to Frege (Frege, 1892) and underlies the denotational semantics of logic. From this point of view the Transformer architecture is an inheritor both of the logical tradition of denotational semantics, and of the statistical mechanics tradition via Hopfield networks. The volume of an n-simplex in R n with vertices at 0, v 1,..., v n is which is 1 n! times the volume of the n-dimensional parallelotope which shares n edges with the nsimplex. In our applications the space of representations H is high dimensional, but we wish to speak of the volume of k-simplices for k < dim(H) and use those volumes to define the coefficients of our simplicial attention. The theory of Clifford algebras is one appropriate framework for such calculations. Let H be an inner product space with pairing (v, w) → v · w. The Clifford algebra Cl(H) is the associative unital R-algebra generated by the vectors v ∈ H with relations vw + wv = 2(v · w) · 1. The canonical k-linear map H −→ Cl(H) is injective, and since v 2 = v 2 · 1 in Cl(H), any nonzero vector v ∈ H is a unit in the Clifford algebra. While as an algebra Cl(H) is only Z 2 -graded, there is nonetheless a Z-grading of the underlying vector space which can be defined as follows: let {e i} n i=1 be an orthonormal basis of H, then the set is a basis for Cl(H), with m ranging over the set {0, . . ., n}. If we assign the basis element e i1 · · · e im the degree m, then this determines a Z-grading [−] k of the Clifford algebra which is easily checked to be independent of the choice of basis. Definition C.1. [A] k denotes the homogeneous component of A ∈ Cl(H) of degree k. There is an operation on elements of the Clifford algebra called reversion in geometric algebra (, p.45) which arises as follows: the opposite algebra Cl(H) op admits a k-linear map j: H −→ Cl(H) op with j(v) = v which satisfies j(v)j(w) + j(w)j(v) = 2(v · w) · 1, and so by the universal property there is a unique morphism of algebras op which restricts to the identity on H..., v k ∈ H and (−) † is homogeneous of degree zero with respect to the Z-grading. Using this operation we can define the magnitude (, p.46) of any element of the Clifford algebra. and in particular for v ∈ H we have |v| = v. Lemma C.4. Set n = dim(H). Then for A ∈ Cl(H) we have Proof. See (, Chapter 2 (1.33)). Example C.5. For a, b, c ∈ H the lemma gives Remark C.6. Given vectors v 1,..., v k ∈ H the wedge product v 1 ∧ · · · ∧ v k is an element in the exterior algebra H. Using the chosen basis B we can identify the underlying vector space of Cl(H) with H and using this identification (set where S k is the permutation group on k letters. That is, the top degree piece of v 1 · · · v k in Cl(H) is always the wedge product. It is then easy to check that the squared magnitude of this wedge product is The term in the innermost bracket is the determinant of the k × k submatrix with columns j = (j 1, . . ., j k) and in the special case where k = n = dim(H) we see that the squared magnitude is just the square of the determinant of the matrix (λ ij) 1≤i,j≤n. The wedge product of k-vectors in H can be thought of as an oriented k-simplex, and the magnitude of this wedge product in the Clifford algebra computes the volume. Definition C.7. The volume of a k-simplex in H with vertices 0, v 1,..., v k is Definition C.8. Given v 1,..., v k ∈ H the k-fold unsigned scalar product is By Lemma C.4 and we have which gives the desired generalisation of the equations in Figure 2. Example C.9. For k = 2 the unsigned scalar product is the absolute value of the dot product, a, b = |a · b|. For k = 3 we obtain the formulas of Definition 2.2, from which it is easy to check that a, b, c = a b c cos 2 θ ab + cos 2 θ bc + cos 2 θ ac − 2 cos θ ab cos θ ac cos θ bc where θ ac, θ ab, θ ac are the angles between a, b, c. The geometry of the three-dimensional case is more familiar: if dim(H) = 3 then |[abc] 3 | is the absolute value of the determinant by, so that Vol 3 = With these formulas in mind the geometric content of the following lemma is clear: (ii) If the v i are all pairwise orthogonal then v 1,..., v k = 0. (iii) The set {v 1, . . ., v k} is linearly dependent if and only if v 1,..., (v) For λ 1,..., λ k ∈ R, we have For more on simplicial methods in the context of geometric algebra see (; The experiments in the original BoxWorld paper contain an unreported cap on timesteps per episode (an episode horizon) of 120 timesteps . We have chosen to run our experiments without an episode horizon, and since this means our reported sample complexities diverge substantially from the original paper (some part of which it seems reasonable to attribute to the lack of horizon) it is necessary to justify this choice. When designing an architecture for deep reinforcement learning the goal is to reduce the expected generalisation error (, §8.1.1) with respect to some class of similar environments. Although this class is typically difficult to specify and is often left implicit, in our case the class includes a range of visual logic puzzles involving spatial navigation, which can be solved without memory 2. A learning curriculum undermines this goal, by making our expectations of generalisation conditional on the provision of a suitable curriculum, whose existence for a given member of the problem class may not be clear in advance. The episode horizon serves as a de facto curriculum, since early in training it biases the distribution of experience rollouts towards the initial problems that an agent has to solve (e.g. learning to pick up the loose key). In order to avoid compromising our ability to expect generalisation to similar puzzles which do not admit such a useful curriculum, we have chosen not to employ an episode horizon. Fortunately, the relational agent performs well even without a curriculum on the original BoxWorld, as our show. In Figure 6 of Section 5, the horizontal axis was environment steps. However, since the simplicial agent has a more complex model, each environment step takes longer to execute and the gradient descent steps are slower. In a typical experiment run on the GCP configuration, the training throughput of the relational agent is 1.9 × 10 4 environment frames per second (FPS) and that of the simplicial agent is 1.4 × 10 4 FPS. The relative performance gap decreases as the GPU memory and the number of IMPALA workers are increased, and this is consistent with the fact that the primary performance difference appears to be the time taken to compute the gradients (35ms vs 80ms). In Figure 8 we give the time-adjusted performance of the simplicial agent (the graph for the relational agent is as before) where the x-axis of the graph of the simplicial agent is scaled by 1.9/1.4. In principle there is no reason for a significant performance mismatch: the 2-simplicial attention can be run in parallel to the ordinary attention (perhaps with two iterations of the 1-simplicial attention per iteration of the 2-simplicial attention) so that with better engineering it should be possible to reduce this gap. 2 The bridge is the unique box both of whose colours appear three times on the board. However, this is not a reliable strategy for detecting bridges for an agent without memory, because once the agent has collected some of the keys on the board, some of the colours necessary to make this deduction may no longer be present. Our experiments involve only a small number of virtual entities, and a small number of iterations of the Transformer block: it is possible that for large numbers of virtual entities and iterations, our choices of layer normalisation are not optimal. Our aim was to test the viability of the simplicial Transformer starting with the minimal configuration, so we have also not tested multiple heads of 2-simplicial attention. Deep reinforcement learning is notorious for poor reproducibility , and in an attempt to follow the emerging best practices we are releasing our agent and environment code, trained agent weights, and training notebooks . The training runs for the relational and simplicial agents are shown in Figure 9 and Figure 10 respectively. In this Appendix we provide further details relating to the analysis of the attention of the trained simplicial agent in Section 6. Across our four trained simplicial agents, the roles of the virtual entities and heads vary: the following comments are all in the context of the best simplicial agent (simplicial agent A of Figure 10) but we observe similar patterns in the other trials. The standard entities are now indexed by 0 ≤ i ≤ 39 and virtual entities by i = 40, 41. In the first iteration of the 2-simplicial Transformer block, the first 1-simplicial head appears to propagate information about the inventory. At the beginning of an episode the attention of each standard entity is distributed between entities 7, 15, 23, 31 (the entities in the rightmost column), it concentrates sharply on 7 (the entity closest to the first inventory slot) after the acquisition of the first loose key, and sharply on 7, 15 after the acquisition of the second loose key. The second 1-simplicial head seems to acquire the meaning described in , where tiles of the same colour attend to one another. A typical example is shown in Figure 11. The video of this episode is available online . Figure 11: Visualisation of 1-simplicial attention in first Transformer block, between standard entities in heads one and two. The vertical axes on the second and third images are the query index 0 ≤ i ≤ 39, the horizontal axes are the key index 0 ≤ j ≤ 39. The standard entities are updated using 2-simplices in the first iteration of the 2-simplicial Transformer block, but this is not interesting as initially the virtual entities are learned embedding vectors, containing no information about the current episode. So we restrict our analysis to the 2-simplicial attention in the second iteration of the Transformer block. For the analysis, it will be convenient to organise episodes of bridge BoxWorld by their puzzle type, which is the tuple (a, b, c) where 1 ≤ a ≤ 3 is the solution length, 1 ≤ b ≤ a is the bridge source and a + 1 ≤ c ≤ 2a is the bridge target, with indices increasing with the distance from the gem. The episodes in Figures 4 and 7 have type. Figure 12: Visualisation of the 2-simplicial attention in the second Transformer block in step 13 of an episode of puzzle type. Entity 1 is the top lock on the Gem, 15 is associated with the inventory, 36 is the lock directly below the player. Shown is a 2-simplex with target 15. Figure 13: Visualisation of the 2-simplicial attention in the second Transformer block in step 29 of an episode of puzzle type. Entity 7 is associated with the inventory, 17 is the player. Shown is a 2-simplex with target 17. To give more details we must first examine the content of the virtual entities after the first iteration, which is a function of the 1-simplicial attention of the virtual entities in the first iteration. In Figures 7, 12, 13 we show these attention distributions multiplied by the pixels in the region [1, R − 2] × [1, C − 1] of the original board, in the second and third columns of the second and third rows. Let f 1 = e 40 and f 2 = e 41 denote the initial representations of the first and second virtual entities, before the first iteration. We use the index z ∈ {1, 2} to stand for a virtual entity. In the first iteration the representations are updated by to where the sum is over all entities α, the a z α are the attention coefficients of the first 1-simplicial head and the coefficients b z α are the attention of the second 1-simplicial head. Writing 0 1, 0 2 for the zero vector in H 1 1, H 1 2 respectively, this can be written as For a query entity i the vector propagated by the 2-simplicial part of the second iteration has the following terms, where Here A i j,k is the 2-simplicial attention with logits p i, l ) is the ith column in our visualisations of the 2-simplicial attention, so in the situation of Figure 7 with i = 25 we have A 25 1,2 ≈ 1 and hence the output of the 2-simplicial head used to update the entity representation of the bottom lock on the Gem is approximately B(f 1 ⊗ f 2). If we ignore the layer normalisation, feedforward network and skip connection in then f 1 ≈ v 1 ⊕ 0 2 and f 2 ≈ 0 1 ⊕ v 0 so that the output of the 2-simplicial head with target i = 25 is approximately Following Boole (Boole, 1847) and Girard it is natural to read the "product" as a conjunction (consider together the entity 1 and the entity 0) and the sum in as a disjunction. An additional layer normalisation is applied to this vector, and the is concatenated with the incoming information for entity 25 from the 1-simplicial attention, before all of this is passed through to form e 25. Given that the output of the 2-simplicial head is the only nontrivial difference between the simplicial and relational agent (with a transformer depth of two, the first 2-simplicial Transformer block only updates the standard entities with information from embedding vectors) the performance differences reported in Figure 6 suggest that this output is informative about avoiding bridges. In the training curves of the agents of Figure 9 and Figure 10 we observe a common plateau at a win rate of 0.85. In Figure 14 we show the per-puzzle win rate of simplicial agent A and relational agent A, on puzzles. These graphs make clear that the transition of both agents to the plateau at 0.85 is explained by solving the type (and to a lesser degree by progress on all puzzle types with b = 1). In Figure 14 and Figure 15 we give the per-puzzle win rates for a small sample of other puzzle types. Shown are the mean and standard deviation of 100 runs across various checkpoints of simplicial agent A and relational agent A. As originally presented in the optimisation algorithm RMSProp is a mini-batch version of Rprop, where instead of dividing by a different number in every mini-batch 3, 3, 5),. (namely, the absolute value of the gradient) we force this number to be similar for adjacent minibatches by keeping a moving average of the square of the gradient. In more detail, one step Rprop is computed by the algorithm κ is the learning rate, x i is a weight, g i is the associated gradient and ε is a small constant (the TensorFlow default value is 10 −10) added for numerical stability. The idea of Rprop is to update weights using only the sign of the gradient: every weight is updated by the same absolute amount κ in each step, with only the sign g i / √ r i = g i /|g i | of the update varying with i. The algorithm RMSprop was introduced as a refinement of Rprop: p is the decay rate (in our experiments the value is 0.99). Clearly Rprop is the p → 0 limit of RMSprop. For further see (, §8.5 In recent years there has been a trend in the literature towards using RMSprop with large values of the hyperparameter ε. For example in RMSProp is used with ε = 0.1, which is also one of the range of values in (, . This "large ε RMSProp" seems to have originated in (, §8). To understand what large ε RMSProp is doing, let us rewrite the algorithm as where S is the sigmoid S(u) = u/ √ 1 + u 2 which asymptotes to 1 as u → +∞ and is wellapproximated by the identity function for small u. We see a new multiplicative factor S(r i /ε) in the optimisation algorithm. Note that √ r i is a moving average of |g i |. Recall the original purpose of Rprop was to update weights using only the sign of the gradient and the learning rate, namely κg i / √ r i. The new S factor in the above reinserts the size of the gradient, but scaled by the sigmoid to be in the unit interval. In the limit ε → 0 we squash the outputs of the sigmoid up near 1 and the standard conceptual description of RMSProp applies. But as ε → 1 the sigmoid S(√ r i) has the effect that for large stable gradients we get updates of size κ and for small stable gradients we get updates of the same magnitude as the gradient. In , large ε RMSprop is a form of RMSprop with smoothed gradient clipping (, §10.11.1). It is no simple matter to define logical reasoning nor to recognise when an agent (be it an animal or a deep reinforcement learning agent) is employing such reasoning . We therefore begin by returning to Aristotle, who viewed logic as the study of general patterns by which one could distinguish valid and invalid forms of philosophical argumentation; this study having as its purpose the production of strategies for winning such argumentation games (; ;). In this view, logic involves • two players with one asserting the truth of a proposition and attempting to defend it, and the latter asserting its falsehood and attempting to refute it, and an • observer attempting to learn the general patterns which are predictive of which of the two players will win such a game given some intermediate state. Suppose we observe over a series of games 4 that a player is following an explicit strategy which has been distilled from general patterns observed in a large distribution of games, and that by following this strategy they almost always win. A component of that explicit strategy can be thought of as logical reasoning to the degree that it consists of rules that are independent of the particulars of the game (, §11.25). The problem of recognising logical reasoning in behaviour is therefore twofold: the strategy employed by a player is typically implicit, and even if we can recognise explicit components of the strategy, in practice there is not always a clear way to decide which rules are domain-specific. In mathematical logic the idea of argumentation games has been developed into a theory of mathematical proof as strategy in the game semantics of linear logic where one player (the prover) asserts a proposition G and the other player (the refuter) interrogates this assertion. Published as a conference paper at ICLR 2020 Consider a reinforcement learning problem in which the deterministic environment encodes G together with a multiset of hypotheses Γ which are sufficient to prove G. Such a pair is called a sequent and is denoted Γ G. The goal of the agent (in the role of prover) is to synthesise a proof of G from Γ through a series of actions. The environment (in the role of refuter) delivers a positive reward if the agent succeeds, and a negative reward if the agent's actions indicate a commitment to a line of proof which cannot possibly succeed. Consider a deep reinforcement learning agent with a policy network parametrised by a vector of weights w ∈ R D and a sequence of full-episode rollouts of this policy in the environment, each of which either ends with the agent constructing a proof (prover wins) or failing to construct a proof (refuter wins) with the sequent Γ G being randomly sampled in each episode. Viewing these episodes as instances of an argumentation game, the goal of Aristotle's observer is to learn from this data to predict, given an intermediate state of some particular episode, which actions by the prover will lead to success (proof) or failure (refutation). As the reward is correlated with success and failure in this sense, the goal of the observer may be identified with the training objective of the action-value network underlying the agent's policy, and we may identify the triple player, opponent, observer with the triple agent, environment and optimisation process. If this process succeeds, so that the trained agent wins in almost every episode, then by definition the weights w are an implicit strategy for proving sequents Γ G. This leads to the question: is the deep reinforcement learning agent parametrised by w performing logical reasoning? We would have no reason to deny that logical reasoning is present if we were to find, in the weights w and dynamics of the agent's network, an isomorphic image of an explicit strategy that we recognise as logically correct. In general, however, it seems more useful to ask to what degree the behaviour is governed by logical reasoning, and thus to what extent we can identify an approximate homomorphic image in the weights and dynamics of a logically correct explicit strategy. Ultimately this should be automated using "logic probes" along the lines of recent developments in neural network probes (; ; ; ;). The design of the BoxWorld environment was intended to stress the planning and reasoning components of an agent's policy (, p.2) and for this reason it is the underlying logical structure of the environment that is of central importance. To explain the logical structure of BoxWorld and bridge BoxWorld we introduce the following notation: given a colour c, we use C to stand for the proposition that a key of this colour is obtainable. Each episode expresses its own set of basic facts, or axioms, about obtainability. For instance, a loose key of colour c gives C as an axiom, and a locked box requiring a key of colour c in order to obtain a key of colour d gives an axiom that at first glance appears to be the implication C −→ D of classical logic. However, since a key may only be used once, this is actually incorrect; instead the logical structure of this situation is captured by the linear implication C D of linear logic . With this understood, each episode of the original BoxWorld provides in visual form a set of axioms Γ such that a strategy for obtaining the Gem is equivalent to a proof of Γ G in intuitionistic linear logic, where G stands for the proposition that the Gem is obtainable. There is a general correspondence in logic between strategies and proofs which we recall in Appendix I. To describe the logical structure of bridge BoxWorld we need to encode the fact that two keys (say a green key and a blue key) are required to obtain the Gem. Once again, it is the linear conjunction ⊗ of linear logic (also called the tensor product) rather than the conjunction of classical logic that properly captures the semantics. The axioms Γ encoded in an episode of bridge BoxWorld contain a single formula of the form X 1 ⊗ X 2 G where x 1, x 2 are the colours of the keys on the Gem, and again a strategy is equivalent to a proof of Γ G. In , the logical structure of the original BoxWorld consists of a fragment of linear logic containing only the connective, while bridge BoxWorld captures a slightly larger fragment containing and ⊗. Next we explain the correspondence between agent behaviour in bridge BoxWorld and proofs in linear logic. For an introduction to linear logic tailored to the setting of games see (, Ch.2). Recall that to each colour c we have associated a proposition C which can be read as "the key of colour c is obtainable". If a box β appears in an episode of bridge BoxWorld (this includes loose in the sentence "There was a cat and he liked to sleep"). These representations of relations take the form of query and key vectors governing the passing of messages between entities; messages update entity representations over several rounds of computation until the final representations reflect not just the meaning of words but also their context in a sentence. There is some evidence that the geometry of these final representations serve to organise word representations in a syntax tree, which could be seen as the appropriate analogue to two-dimensional space in the context of language . The Transformer may therefore be viewed as an inductive bias for learning structural representations which are graphs, with entities as vertices and relations as edges. While a graph is a discrete mathematical object, there is a naturally associated topological space which is obtained by gluing 1-simplices (copies of the unit interval) indexed by edges along 0-simplices (points) indexed by vertices. There is a general mathematical notion of a simplicial set which is a discrete structure containing a set of n-simplices for all n ≥ 0 together with an encoding of the incidence relations between these simplices. Associated to each simplicial set is a topological space, obtained by gluing together vertices, edges, triangles (2-simplices), tetrahedrons (3-simplices), and so on, according to the instructions contained in the simplicial set. Following the aforementioned works in neuroscience (; ; ; ; ;) and their emphasis on spatial structure, it is natural to ask if a simplicial inductive bias for learning structural representations can facilitate abstract reasoning. This question partly motivated the developments in this paper.
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rkecJ6VFvr
We introduce the 2-simplicial Transformer and show that this architecture is a useful inductive bias for logical reasoning in the context of deep reinforcement learning.
We present Tensor-Train RNN (TT-RNN), a novel family of neural sequence architectures for multivariate forecasting in environments with nonlinear dynamics. Long-term forecasting in such systems is highly challenging, since there exist long-term temporal dependencies, higher-order correlations and sensitivity to error propagation. Our proposed tensor recurrent architecture addresses these issues by learning the nonlinear dynamics directly using higher order moments and high-order state transition functions. Furthermore, we decompose the higher-order structure using the tensor-train (TT) decomposition to reduce the number of parameters while preserving the model performance. We theoretically establish the approximation properties of Tensor-Train RNNs for general sequence inputs, and such guarantees are not available for usual RNNs. We also demonstrate significant long-term prediction improvements over general RNN and LSTM architectures on a range of simulated environments with nonlinear dynamics, as well on real-world climate and traffic data. One of the central questions in science is forecasting: given the past history, how well can we predict the future? In many domains with complex multivariate correlation structures and nonlinear dynamics, forecasting is highly challenging since the system has long-term temporal dependencies and higher-order dynamics. Examples of such systems abound in science and engineering, from biological neural network activity, fluid turbulence, to climate and traffic systems (see FIG0). Since current forecasting systems are unable to faithfully represent the higher-order dynamics, they have limited ability for accurate long-term forecasting. Therefore, a key challenge is accurately modeling nonlinear dynamics and obtaining stable long-term predictions, given a dataset of realizations of the dynamics. Here, the forecasting problem can be stated as follows: how can we efficiently learn a model that, given only few initial states, can reliably predict a sequence of future states over a long horizon of T time-steps? Common approaches to forecasting involve linear time series models such as auto-regressive moving average (ARMA), state space models such as hidden Markov model (HMM), and deep neural networks. We refer readers to a survey on time series forecasting by BID2 and the references therein. A recurrent neural network (RNN), as well as its memory-based extensions such as the LSTM, is a class of models that have achieved good performance on sequence prediction tasks from demand forecasting BID5 to speech recognition BID15 and video analysis BID9. Although these methods can be effective for short-term, smooth dynamics, neither analytic nor data-driven learning methods tend to generalize well to capturing long-term nonlinear dynamics and predicting them over longer time horizons. To address this issue, we propose a novel family of tensor-train recurrent neural networks that can learn stable long-term forecasting. These models have two key features: they 1) explicitly model the higher-order dynamics, by using a longer history of previous hidden states and high-order state interactions with multiplicative memory units; and 2) they are scalable by using tensor trains, a structured low-rank tensor decomposition that greatly reduces the number of model parameters, while mostly preserving the correlation structure of the full-rank model. In this work, we analyze Tensor-Train RNNs theoretically, and also experimentally validate them over a wide range of forecasting domains. Our contributions can be summarized as follows:• We describe how TT-RNNs encode higher-order non-Markovian dynamics and high-order state interactions. To address the memory issue, we propose a tensor-train (TT) decomposition that makes learning tractable and fast.• We provide theoretical guarantees for the representation power of TT-RNNs for nonlinear dynamics, and obtain the connection between the target dynamics and TT-RNN approximation. In contrast, no such theoretical are known for standard recurrent networks.• We validate TT-RNNs on simulated data and two real-world environments with nonlinear dynamics (climate and traffic). Here, we show that TT-RNNs can forecast more accurately for significantly longer time horizons compared to standard RNNs and LSTMs. Forecasting Nonlinear Dynamics Our goal is to learn an efficient model f for sequential multivariate forecasting in environments with nonlinear dynamics. Such systems are governed by dynamics that describe how a system state x t ∈ R d evolves using a set of nonlinear differential equations: DISPLAYFORM0 where ξ i can be an arbitrary (smooth) function of the state x t and its derivatives. Continous time dynamics are usually described by differential equations while difference equations are employed for discrete time. In continuous time, a classic example is the first-order Lorenz attractor, whose realizations showcase the "butterfly-effect", a characteristic set of double-spiral orbits. In discretetime, a non-trivial example is the 1-dimensional Genz dynamics, whose difference equation is: DISPLAYFORM1 where x t denotes the system state at time t and c, w are the parameters. Due to the nonlinear nature of the dynamics, such systems exhibit higher-order correlations, long-term dependencies and sensitivity to error propagation, and thus form a challenging setting for learning. Given a sequence of initial states x 0... x t, the forecasting problem aims to learn a model f DISPLAYFORM2 that outputs a sequence of future states x t+1... x T. Hence, accurately approximating the dynamics ξ is critical to learning a good forecasting model f and accurately predicting for long time horizons. First-order Markov Models In deep learning, common approaches for modeling dynamics usually employ first-order hidden-state models, such as recurrent neural networks (RNNs). An RNN with a single RNN cell recursively computes the output y t from a hidden state h t using: DISPLAYFORM3 where f is the state transition function, g is the output function and θ are model parameters. An RNN only considers the most recent hidden state in its state transition function. A common parametrization scheme for is a nonlinear activation function applied to a linear map of x t and h t−1 as: LSTMs BID8 and GRUs BID3. For instance, LSTM cells use a memory-state, which mitigate the "exploding gradient" problem and allow RNNs to propagate information over longer time horizons. Although RNNs are very expressive, they compute h t only using the previous state h t−1 and input x t. Such models do not explicitly model higher-order dynamics and only implicitly model long-term dependencies between all historical states h 0... h t, which limits their forecasting effectiveness in environments with nonlinear dynamics. DISPLAYFORM4 To effectively learn nonlinear dynamics, we propose Tensor-Train RNNs, or TT-RNNs, a class of higher-order models that can be viewed as a higher-order generalization of RNNs. We developed TT-RNNs with two goals in mind: explicitly modeling 1) L-order Markov processes with L steps of temporal memory and 2) polynomial interactions between the hidden states h · and x t.First, we consider longer "history": we keep length L historic states: DISPLAYFORM0 where f is an activation function. In principle, early work BID7 has shown that with a large enough hidden state size, such recurrent structures are capable of approximating any dynamics. Second, to learn the nonlinear dynamics ξ efficiently, we also use higher-order moments to approximate the state transition function. We construct a higher-order transition tensor by modeling a degree P polynomial interaction between hidden states. Hence, the TT-RNN with standard RNN cell is defined by: DISPLAYFORM1 where α index the hidden dimension, i · index historic hidden states and P is the polynomial degree. Here, we defined the L-lag hidden state as: DISPLAYFORM2 We included the bias unit 1 to model all possible polynomial expansions up to order P in a compact form. The TT-RNN with LSTM cell, or "TLSTM", is defined analogously as: DISPLAYFORM3 where • denotes the Hadamard product. Note that the bias units are again included. TT-RNN serves as a module for sequence-to-sequence (Seq2Seq) framework BID18, which consists of an encoder-decoder pair (see FIG1). We use tensor-train recurrent cells both the encoder and decoder. The encoder receives the initial states and the decoder predicts x t+1,..., x T. For each timestep t, the decoder uses its previous prediction y t as an input. Unfortunately, due to the "curse of dimensionality", the number of parameters in W α with hidden size H grows exponentially as O(HL P), which makes the high-order model prohibitively large to train. To overcome this difficulty, we utilize tensor networks to approximate the weight tensor. Such networks encode a structural decomposition of tensors into low-dimensional components and have been shown to provide the most general approximation to smooth tensors BID11. The most commonly used tensor networks are linear tensor networks (LTN), also known as tensor-trains in numerical analysis or matrix-product states in quantum physics BID12.A tensor train model decomposes a P -dimensional tensor W into a network of sparsely connected low-dimensional tensors DISPLAYFORM0 DISPLAYFORM1 as depicted in Figure. When r 0 = r P = 1 the {r d} are called the tensor-train rank. With tensortrain, we can reduce the number of parameters of TT-RNN from (HL + 1) P to (HL + 1)R 2 P, with R = max d r d as the upper bound on the tensor-train rank. Thus, a major benefit of tensor-train is that they do not suffer from the curse of dimensionality, which is in sharp contrast to many classical tensor decompositions, such as the Tucker decomposition. A significant benefit of using tensor-trains is that we can theoretically characterize the representation power of tensor-train neural networks for approximating high-dimensional functions. We do so by analyzing a class of functions that satisfies some regularity condition. For such functions, tensor-train decompositions preserve weak differentiability and yield a compact representation. We combine this property with neural network estimation theory to bound the approximation error for TT-RNN with one hidden layer in terms of: 1) the regularity of the target function f, 2) the dimension of the input space, 3) the tensor train rank and 4) the order of the tensor. In the context of TT-RNN, the target function f (x), with x = s ⊗... ⊗ s, describes the state transitions of the system dynamics, as in. Let us assume that f (x) is a Sobolev function: f ∈ H k µ, defined on the input space I = I 1 × I 2 × · · · I d, where each I i is a set of vectors. The space H k µ is defined as the functions that have bounded derivatives up to some order k and are L µ -integrable: DISPLAYFORM0 where D (i) f is the i-th weak derivative of f and µ ≥ 0. 1 Any Sobolev function admits a Schmidt decomposition: DISPLAYFORM1, where {λ} are the eigenvalues and {γ}, {φ} are the associated eigenfunctions. Hence, we can decompose the target function f ∈ H k µ as: DISPLAYFORM2 where DISPLAYFORM3 We can truncate to a low dimensional subspace (r < ∞), and obtain the functional tensor-train (FTT) approximation of the target function f: DISPLAYFORM4 In practice, TT-RNN implements a polynomial expansion of the state s as in, using powers [s, s ⊗2, · · ·, s ⊗p] to approximate f T T, where p is the degree of the polynomial. We can then bound the approximation error using TT-RNN, viewed as a one-layer hidden neural network: 1 A weak derivative generalizes the derivative concept for (non)-differentiable functions and is implicitly defined as: DISPLAYFORM5 DISPLAYFORM6 is the size of the state space, r is the tensor-train rank and p is the degree of high-order polynomials i.e., the order of tensor. For the full proof, see the Appendix. From this theorem we see: 1) if the target f becomes smoother, it is easier to approximate and 2) polynomial interactions are more efficient than linear ones in the large rank region: if the polynomial order increases, we require fewer hidden units n. This applies to the full family of TT-RNNs, including those using vanilla RNN or LSTM as the recurrent cell, as long as we are given a state transitions (x t, s t) → s t+1 (e.g. the state transition function learned by the encoder). We validated the accuracy and efficiency of TT-RNN on one synthetic and two real-world datasets, as described below; Detailed preprocessing and data statistics are deferred to the Appendix. Genz dynamics The Genz "product peak" (see FIG3 a) is one of the Genz functions BID6, which are often used as a basis for high-dimensional function approximation. In particular, BID1 used them to analyze tensor-train decompositions. We generated 10, 000 samples of length 100 using with w = 0.5, c = 1.0 and random initial points. Traffic The traffic data (see FIG3 b) of Los Angeles County highway network is collected from California department of transportation http://pems.dot.ca.gov/. The prediction task is to predict the speed readings for 15 locations across LA, aggregated every 5 minutes. After upsampling and processing the data for missing values, we obtained 8, 784 sequences of length 288.Climate The climate data (see FIG3 c) is collected from the U.S. Historical Climatology Network (USHCN) (http://cdiac.ornl.gov/ftp/ushcn_daily/). The prediction task is to predict the daily maximum temperature for 15 stations. The data spans approximately 124 years. After preprocessing, we obtained 6, 954 sequences of length 366. Experimental Setup To validate that TT-RNNs effectively perform long-term forecasting task in, we experiment with a seq2seq architecture with TT-RNN using LSTM as recurrent cells (TLSTM). For all experiments, we used an initial sequence of length t 0 as input and varied the forecasting horizon T. We trained all models using stochastic gradient descent on the length-T sequence regression loss L(y,ŷ) = T t=1 ||ŷ t − y t || 2 2, where y t = x t+1,ŷ t are the ground truth and model prediction respectively. For more details on training and hyperparameters, see the Appendix. We compared TT-RNN against 2 set of natural baselines: 1st-order RNN (vanilla RNN, LSTM), and matrix RNNs (vanilla MRNN, MLSTM), which use matrix products of multiple hidden states without factorization BID14 ). We observed that TT-RNN with RNN cells outperforms vanilla RNN and MRNN, but using LSTM cells performs best in all experiments. We also evaluated the classic ARIMA time series model and observed that it performs ∼ 5% worse than LSTM.Long-term Accuracy For traffic, we forecast up to 18 hours ahead with 5 hours as initial inputs. For climate, we forecast up to 300 days ahead given 60 days of initial observations. For Genz dynamics, we forecast for 80 steps given 5 initial steps. All are averages over 3 runs. We now present the long-term forecasting accuracy of TLSTM in nonlinear systems. FIG4 shows the test prediction error (in RMSE) for varying forecasting horizons for different datasets. We can see that TLSTM notably outperforms all baselines on all datasets in this setting. In particular, TLSTM is more robust to long-term error propagation. We observe two salient benefits of using TT-RNNs over the unfactorized models. First, MRNN and MLSTM can suffer from overfitting as the number of weights increases. Second, on traffic, unfactorized models also show considerable instability in their long-term predictions. These suggest that tensor-train neural networks learn more stable representations that generalize better for long-term horizons. To get intuition for the learned models, we visualize the best performing TLSTM and baselines in FIG5 for the Genz function "corner-peak" and the statetransition function. We can see that TLSTM can almost perfectly recover the original function, while LSTM and MLSTM only correctly predict the mean. These baselines cannot capture the dynamics fully, often predicting an incorrect range and phase for the dynamics. In FIG6 we show predictions for the real world traffic and climate dataset. We can see that the TLSTM corresponds significantly better with ground truth in long-term forecasting. As the ground truth time series is highly chaotic and noisy, LSTM often deviates from the general trend. While both MLSTM and TLSTM can correctly learn the trend, TLSTM captures more detailed curvatures due to the inherent high-order structure. Speed Performance Trade-off We now investigate potential trade-offs between accuracy and computation. FIG7 displays the validation loss with respect to the number of steps, for the best performing models on long-term forecasting. We see that TT-RNNs converge significantly faster than other models, and achieve lower validation-loss. This suggests that TT-RNN has a more efficient representation of the nonlinear dynamics, and can learn much faster as a . Hyper-parameter Analysis The TLSTM model is equipped with a set of hyper-parameters, such as tensor-train rank and the number of lags. We perform a random grid search over these hyperparameters and showcase the in Table 1. In the top row, we report the prediction RMSE for the largest forecasting horizon w.r.t tensor ranks for all the datasets with lag 3. When the rank is too low, the model does not have enough capacity to capture non-linear dynamics. when the rank is too high, the model starts to overfit. In the bottom row, we report the effect of changing lags (degree of orders in Markovian dynamics). For each setting, the best r is determined by cross-validation. For different forecasting horizon, the best lag value also varies. We have also evaluated TT-RNN on long-term forecasting for chaotic dynamics, such as the Lorenz dynamics (see FIG8). Such dynamics are highly sensitive to input perturbations: two close points can move exponentially far apart under the dynamics. This makes long-term forecasting highly challenging, as small errors can lead to catastrophic longterm errors. FIG8 shows that TT-RNN can predict up to T = 40 steps into the future, but diverges quickly beyond that. We have found no state-of-the-art prediction model is stable in this setting. Classic work in time series forecasting has studied auto-regressive models, such as the ARMA or ARIMA model BID2, which model a process x(t) linearly, and so do not capture nonlinear dynamics. Our method contrasts with this by explicitly modeling higher-order dependencies. Using neural networks to model time series has a long history. More recently, they have been applied to room temperature prediction, weather forecasting, traffic prediction and other domains. We refer to BID13 for a detailed overview of the relevant literature. From a modeling perspective, BID7 ) considers a high-order RNN to simulate a deterministic finite state machine and recognize regular grammars. This work considers a second order mapping from inputs x(t) and hidden states h(t) to the next state. However, this model only considers the most recent state and is limited to two-way interactions. BID17 proposes multiplicative RNN that allow each hidden state to specify a different factorized hidden-to-hidden weight matrix. A similar approach also appears in BID14, but without the factorization. Our method can be seen as an efficient generalization of these works. Moreover, hierarchical RNNs have been used to model sequential data at multiple resolutions, e.g. to learn both short-term and long-term human behavior BID20.Tensor methods have tight connections with neural networks. For example, BID4 shows convolutional neural networks have equivalence to hierarchical tensor factorizations. BID10 BID19 employs tensor-train to compress large neural networks and reduce the number of weights. BID19 forms tensors from reshaping inputs and decomposes the input-output weights. Our model forms tensors from high-order hidden states and decomposes the hidden-output weights. BID16 propose to parameterizes the supervised learning models with matrix-product states for image classification. This work however, to the best of our knowledge, is the first work to consider tensor networks in RNNS for sequential prediction tasks for learning in environments with nonlinear dynamics. In this work, we considered forecasting under nonlinear dynamics. We propose a novel class of RNNs -TT-RNN. We provide approximation guarantees for TT-RNN and characterize its representation power. We demonstrate the benefits of TT-RNN to forecast accurately for significantly longer time horizon in both synthetic and real-world multivariate time series data. As we observed, chaotic dynamics still present a significant challenge to any sequential prediction model. Hence, it would be interesting to study how to learn robust models for chaotic dynamics. In other sequential prediction settings, such as natural language processing, there does not (or is not known to) exist a succinct analytical description of the data-generating process. It would be interesting to further investigate the effectiveness of TT-RNNs in such domains as well. We provide theoretical guarantees for the proposed TT-RNN model by analyzing a class of functions that satisfy some regularity condition. For such functions, tensor-train decompositions preserve weak differentiability and yield a compact representation. We combine this property with neural network estimation theory to bound the approximation error for TT-RNN with one hidden layer, in terms of: 1) the regularity of the target function f, 2) the dimension of the input space, and 3) the tensor train rank. In the context of TT-RNN, the target function f (x) with x = s ⊗... ⊗ s, is the system dynamics that describes state transitions, as in. Let us assume that f (x) is a Sobolev function: f ∈ H k µ, defined on the input space I = I 1 × I 2 × · · · I d, where each I i is a set of vectors. The space H k µ is defined as the set of functions that have bounded derivatives up to some order k and are L µ -integrable: DISPLAYFORM0 where D (i) f is the i-th weak derivative of f and µ ≥ 0. Any Sobolev function admits a Schmidt decomposition: DISPLAYFORM0, where {λ} are the eigenvalues and {γ}, {φ} are the associated eigenfunctions. Hence, we can decompose the target function f ∈ H k µ as: DISPLAYFORM1 where DISPLAYFORM2 We can truncate Eqn 13 to a low dimensional subspace (r < ∞), and obtain the functional tensor-train (FTT) approximation of the target function f: DISPLAYFORM3.FTT approximation in Eqn 13 projects the target function to a subspace with finite basis. And the approximation error can be bounded using the following Lemma: Lemma 7.1 (FTT Approximation BID1). Let f ∈ H k µ be a Hölder continuous function, defined on a bounded domain I = I 1 × · · · × I d ⊂ R d with exponent α > 1/2, the FTT approximation error can be upper bounded as DISPLAYFORM4 for r ≥ 1 and DISPLAYFORM5 Lemma 7.1 relates the approximation error to the dimension d, tensor-train rank r,and the regularity of the target function k. In practice, TT-RNN implements a polynomial expansion of the input states s, using powers [s, s ⊗2, · · ·, s ⊗p] to approximate f T T, where p is the degree of the polynomial. We can further use the classic spectral approximation theory to connect the TT-RNN structure with the degree of the polynomial, i.e., the order of the tensor. Let DISPLAYFORM6 Given a function f and its polynomial expansion P T T, the approximation error is therefore bounded by: 2 A weak derivative generalizes the derivative concept for (non)-differentiable functions and is implicitly defined as: DISPLAYFORM7 Where p is the order of tensor and r is the tensor-train rank. As the rank of the tensor-train and the polynomial order increase, the required size of the hidden units become smaller, up to a constant that depends on the regularity of the underlying dynamics f. We trained all models using the RMS-prop optimizer and employed a learning rate decay of 0.8 schedule. We performed an exhaustive search over the hyper-parameters for validation. TAB3 reports the hyper-parameter search range used in this work. For all datasets, we used a 80% − 10% − 10% train-validation-test split and train for a maximum of 1e 4 steps. We compute the moving average of the validation loss and use it as an early stopping criteria. We also did not employ scheduled sampling, as we found training became highly unstable under a range of annealing schedules. Genz Genz functions are often used as basis for evaluating high-dimensional function approximation. In particular, they have been used to analyze tensor-train decompositions BID1. There are in total 7 different Genz functions. g 1 (x) = cos(2πw + cx), DISPLAYFORM0 2 π|x−w| DISPLAYFORM1 For each function, we generated a dataset with 10, 000 samples using FORMULA1 with w = 0.5 and c = 1.0 and random initial points draw from a range of [−0.1, 0.1].Traffic We use the traffic data of Los Angeles County highway network collected from California department of transportation http://pems.dot.ca.gov/. The dataset consists of 4 month speed readings aggregated every 5 minutes. Due to large number of missing values (∼ 30%) in the raw data, we impute the missing values using the average values of non-missing entries from other sensors at the same time. In total, after processing, the dataset covers 35 136, time-series. We treat each sequence as daily traffic of 288 time stamps. We up-sample the dataset every 20 minutes, which in a dataset of 8 784 sequences of daily measurements. We select 15 sensors as a joint forecasting tasks. Climate We use the daily maximum temperature data from the U.S. Historical Climatology Network (USHCN) daily (http://cdiac.ornl.gov/ftp/ushcn_daily/) contains daily measurements for 5 climate variables for approximately 124 years. The records were collected across more than 1 200 locations and span over 45 384 days. We analyze the area in California which contains 54 stations. We removed the first 10 years of day, most of which has no observations. We treat the temperature reading per year as one sequence and impute the missing observations using other non-missing entries from other stations across years. We augment the datasets by rotating the sequence every 7 days, which in a data set of 5 928 sequences. We also perform a DickeyFuller test in order to test the null hypothesis of whether a unit root is present in an autoregressive model. The test statistics of the traffic and climate data is shown in TAB5, which demonstrate the non-stationarity of the time series. Genz functions are basis functions for multi-dimensional FIG0 visualizes different Genz functions, realizations of dynamics and predictions from TLSTM and baselines. We can see for "oscillatory", "product peak" and "Gaussian ", TLSTM can better capture the complex dynamics, leading to more accurate predictions. Chaotic dynamics such as Lorenz attractor is notoriously different to lean in non-linear dynamics. In such systems, the dynamics are highly sensitive to perturbations in the input state: two close points can move exponentially far apart under the dynamics. We also evaluated tensor-train neural networks on long-term forecasting for Lorenz attractor and report the as follows: Lorenz The Lorenz attractor system describes a two-dimensional flow of fluids (see FIG8): dx dt = σ(y − x), dy dt = x(ρ − z) − y, dz dt = xy − βz, σ = 10, ρ = 28, β = 2.667.This system has chaotic solutions (for certain parameter values) that revolve around the so-called Lorenz attractor. We simulated 10 000 trajectories with the discretized time interval length 0.01. We sample from each trajectory every 10 units in Euclidean distance. The dynamics is generated using σ = 10 ρ = 28, β = 2.667. The initial condition of each trajectory is sampled uniformly random from the interval of [−0.1, 0.1]. FIG0 shows 45 steps ahead predictions for all models. HORNN is the full tensor TT-RNN using vanilla RNN unit without the tensor-train decomposition. We can see all the tensor models perform better than vanilla RNN or MRNN. TT-RNN shows slight improvement at the beginning state. TT-RNN shows more consistent, but imperfect, predictions, whereas the baselines are highly unstable and gives noisy predictions.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HJJ0w--0W
Accurate forecasting over very long time horizons using tensor-train RNNs
Recent efforts on combining deep models with probabilistic graphical models are promising in providing flexible models that are also easy to interpret. We propose a variational message-passing algorithm for variational inference in such models. We make three contributions. First, we propose structured inference networks that incorporate the structure of the graphical model in the inference network of variational auto-encoders (VAE). Second, we establish conditions under which such inference networks enable fast amortized inference similar to VAE. Finally, we derive a variational message passing algorithm to perform efficient natural-gradient inference while retaining the efficiency of the amortized inference. By simultaneously enabling structured, amortized, and natural-gradient inference for deep structured models, our method simplifies and generalizes existing methods. To analyze real-world data, machine learning relies on models that can extract useful patterns. Deep Neural Networks (DNNs) are a popular choice for this purpose because they can learn flexible representations. Another popular choice are probabilistic graphical models (PGMs) which can find interpretable structures in the data. Recent work on combining these two types of models hopes to exploit their complimentary strengths and provide powerful models that are also easy to interpret BID10 BID14 BID0 BID3.To apply such hybrid models to real-world problems, we need efficient algorithms that can extract useful structure from the data. However, the two fields of deep learning and PGMs traditionally use different types of algorithms. For deep learning, stochastic-gradient methods are the most popular choice, e.g., those based on back-propagation. These algorithms are not only widely applicable, but can also employ amortized inference to enable fast inference at test time BID17 BID12. On the other hand, most popular algorithms for PGMs exploit the model's graphical conjugacy structure to gain computational efficiency, e.g., variational message passing (VMP) BID18, expectation propagation BID16, Kalman filtering BID4 BID5, and more recently natural-gradient variational inference BID9 and stochastic variational inference BID8. In short, the two fields of deep learning and probabilistic modelling employ fundamentally different inferential strategies and a natural question is, whether we can design algorithms that combine their respective strengths. There have been several attempts to design such methods in the recent years, e.g., BID14; BID3; BID0; BID10; BID2. Our work in this paper is inspired by the previous work of BID10 that aims to combine message-passing, natural-gradient, and amortized inference. Our proposed method in this paper simplifies and generalizes the method of BID10.To do so, we propose Structured Inference Networks (SIN) that incorporate the PGM structure in the standard inference networks used in variational auto-encoders (VAE) BID12 BID17. We derive conditions under which such inference networks can enable fast amortized inference similar to VAE. By using a recent VMP method of BID11, we The generative models are just like the decoder in VAE but they employ a structured prior, e.g., Fig. (a) has a mixture-model prior while Fig. (b) has a dynamical system prior. SINs, just like the encoder in VAE, mimic the structure of the generative model by using parameters φ. One main difference is that in SIN the arrows between y n and x n are reversed compared to the model, while rest of the arrows have the same direction.derive a variational message-passing algorithm whose messages automatically reduce to stochasticgradients for the deep components of the model, while perform natural-gradient updates for the PGM part. Overall, our algorithm enables Structured, Amortized, and Natural-gradient (SAN) updates and therefore we call our algorithm the SAN algorithm. We show that our algorithm give comparable performance to the method of BID10 while simplifying and generalizing it. The code to reproduce our is available at https://github.com/emtiyaz/vmp-for-svae/. We consider the modelling of data vectors y n by using local latent vectors x n. Following previous works BID10 BID0 BID14, we model the output y n given x n using a neural network with parameters θ NN, and capture the correlations among data vectors y:= {y 1, y 2, . . ., y N} using a probabilistic graphical model (PGM) over the latent vectors x:= {x 1, x 2, . . ., x N}. Specifically, we use the following joint distribution: DISPLAYFORM0 where θ NN and θ PGM are parameters of a DNN and PGM respectively, and θ:= {θ NN, θ PGM}.This combination of probabilistic graphical model and neural network is referred to as structured variational auto-encoder (SVAE) by BID10. SVAE employs a structured prior p(x|θ PGM) to extract useful structure from the data. SVAE therefore differs from VAE BID12 where the prior distribution over x is simply a multivariate Gaussian distribution p(x) = N (x|0, I) with no special structure. To illustrate this difference, we now give an example. Example (Mixture-Model Prior): Suppose we wish to group the outputs y n into K distinct clusters. For such a task, the standard Gaussian prior used in VAE is not a useful prior. We could instead use a mixture-model prior over x n, as suggested by BID10, DISPLAYFORM1 where z n ∈ {1, 2, . . ., K} is the mixture indicator for the n'th data example, and π k are mixing proportions that sum to 1 over k. Each mixture component can further be modelled, e.g., by using a Gaussian distribution p(x n |z n = k):= N (x n |µ k, Σ k) giving us the Gaussian Mixture Model (GMM) prior with PGM hyperparameters DISPLAYFORM2. The graphical model of an SVAE with such priors is shown in FIG0. This type of structured-prior is useful for discovering clusters in the data, making them easier to interpret than VAE.Our main goal in this paper is to approximate the posterior distribution p(x, θ|y). Specifically, similar to VAE, we would like to approximate the posterior of x by using an inference network. In VAE, this is done by using a function parameterized by DNN, as shown below: DISPLAYFORM3 where the left hand side is the posterior distribution of x, and the first equality is obtained by using the distribution of the decoder in the Bayes' rule. The right hand side is the distribution of the encoder where q is typically an exponential-family distribution whose natural-parameters are modelled by using a DNN f φ with parameters φ. The same function f φ (·) is used for all n which reduces the number of variational parameters and enables sharing of statistical strengths across n. This leads to both faster training and faster testing BID17.Unfortunately, for SVAE, such inference networks may give inaccurate predictions since they ignore the structure of the PGM prior p(x|θ PGM). For example, suppose y n is a time-series and we model x n using a dynamical system as depicted in FIG0. In this case, the inference network of FORMULA3 is not an accurate approximation since it ignores the time-series structure in x. This might in inaccurate predictions of distant future observations, e.g., prediction for an observation y 10 given the past data {y 1, y 2, y 3} would be inaccurate because the inference network has no path connecting x 10 to x 1, x 2, or x 3. In general, whenever the prior structure is important in obtaining accurate predictions, we might want to incorporate it in the inference network. A solution to this problem is to use an inference network with the same structure as the model but to replace all its edges by neural networks BID14 BID3 ). This solution is reasonable when the PGM itself is complex, but might be too aggressive when the PGM is a simple model, e.g., when the prior in FIG0 is a linear dynamical system. Using DNNs in such cases would dramatically increase the number of parameters which will lead to a possible deterioration in both speed and performance. BID10 propose a method to incorporate the structure of the PGM part in the inference network. For SVAE with conditionally-conjugate PGM priors, they aim to obtain a mean-field variational inference by optimizing the following standard variational lower bound 1: DISPLAYFORM4 where q(x|λ x) is a minimal exponential-family distribution with natural parameters λ x. To incorporate an inference network, they need to restrict the parameter of q(x|λ x) similar to the VAE encoder shown in, i.e., λ x must be defined using a DNN with parameter φ. For this purpose, they use a two-stage iterative procedure. In the first stage, they obtain λ * x by optimizing a surrogate lower bound where the decoder in is replaced by the VAE encoder of (highlighted in blue), DISPLAYFORM5 The optimal λ * x is a function of θ and φ and they denote it by λ * x (θ, φ). In the second stage, they substitute λ * x into and take a gradient step to optimize L(λ * x (θ, φ), θ) with respect to θ and φ. This is iterated until convergence. The first stage ensures that q(x|λ x) is defined in terms of φ similar to VAE, while the second stage improves the lower bound while maintaining this restriction. The advantage of this formulation is that when the factors q(x n |f φ (y n)) are chosen to be conjugate to p(x|θ PGM), the first stage can be performed efficiently using VMP. However, the overall method might be difficult to implement and tune. This is because the procedure is equivalent to an implicitly-constrained optimization 2 that optimizes with the constraint λ * x (θ, φ) = arg max λxL (λ x, θ, φ). Such constrained problems are typically more difficult to solve than their unconstrained counterparts, especially when the constraints are nonconvex BID6. Theoretically, the convergence of such methods is difficult to guarantee when the constraints are violated. In practice, this makes the implementation difficult because in every iteration the VMP updates need to run long enough to reach close to a local optimum of the surrogate lower bound. Another disadvantage of the method of BID10 is that its efficiency could be ensured only under restrictive assumptions on the PGM prior. For example, the method does not work for PGMs that contain non-conjugate factors because in that case VMP cannot be used to optimize the surrogate lower bound. In addition, the method is not directly applicable when λ x is constrained and when p(x|θ PGM) has additional latent variables (e.g., indicator variables z n in the mixture-model example). In summary, the method of BID10 might be difficult to implement and tune, and also difficult to generalize to cases when PGM is complex. In this paper, we propose an algorithm to simplify and generalize the algorithm of BID10. We propose structured inference networks (SIN) that incorporate the structure of the PGM part in the VAE inference network. Even when the graphical model contains a non-conjugate factor, SIN can preserve some structure of the model. We derive conditions under which SIN can enable efficient amortized inference by using stochastic gradients. We discuss many examples to illustrate the design of SIN for many types of PGM structures. Finally, we derive a VMP algorithm to perform natural-gradient variational inference on the PGM part while retaining the efficiency of the amortized inference on the DNN part. We start with the design of inference networks that incorporate the PGM structure into the inference network of VAE. We propose the following structured inference network (SIN) which consists of two types of factors, DISPLAYFORM0 The DNN factor here is similar to while the PGM factor is an exponential-family distribution which has a similar graph structure as the PGM prior p(x|θ PGM). The role of the DNN term is to enable flexibility while the role of the PGM term is to incorporate the model's PGM structure into the inference network. Both factors have their own parameters. φ NN is the parameter of DNN and φ PGM is the natural parameter of the PGM factor. The parameter set is denoted by φ:= {φ NN, φ PGM}.How should we choose the two factors? As we will show soon that, for fast amortized inference, these factors need to satisfy the following two conditions. The first condition is that the normalizing constant 3 log Z(φ) is easy to evaluate and differentiate. The second condition is that we can draw samples from SIN, i.e., x * (φ) ∼ q(x|y, φ) where we have denoted the sample by x * (φ) to show its dependence on φ. An additional desirable, although not necessary, feature is to be able to compute the gradient of x * (φ) by using the reparameterization trick. Now, we will show that given these two conditions we can easily perform amortized inference. We show that when the above two conditions are met, a stochastic gradient of the lower bound can be computed in a similar way as in VAE. For now, we assume that θ is a deterministic variable (we will relax this in the next section). The variational lower bound in this case can be written as follows: DISPLAYFORM1 The first term above is identical to the lower bound of the standard VAE, while the rest of the terms are different (shown in blue). The second term differs due to the PGM prior in the generative model. In VAE, p(x|θ PGM) is a standard normal, but here it is a structured PGM prior. The last two terms arise due to the PGM term in SIN. If we can compute the gradients of the last three terms and generate samples x * (φ) from SIN, we can perform amortized inference similar to VAE. Fortunately, the second and third terms are usually easy for PGMs, therefore we only require the gradient of Z(φ) to be easy to compute. This confirms the two conditions required for a fast amortized inference. The ing expressions for the stochastic gradients are shown below where we highlight in blue the additional gradient computations required on top of a VAE implementation (we also drop the explicit dependence of x * (φ) over φ for notational simplicity). DISPLAYFORM2 DISPLAYFORM3 The gradients of Z(φ) and x * (φ) might be cheap or costly depending on the type of PGM. For example, for LDS, these require a full inference through the model which costs O(N) computation and is infeasible for large N. However, for GMM, each x n can be independently sampled and therefore computations are independent of N. In general, if the latent variables in PGM are highly correlated (e.g., Gaussian process prior), then Bayesian inference is not computationally efficient and gradients are difficult to compute. In this paper, we do not consider such difficult cases and assume that Z(φ) and x * (φ) can be evaluated and differentiated cheaply. We now give many examples of SIN that meet the two conditions required for a fast amortized inference. When p(x|θ PGM) is a conjugate exponential-family distribution, choosing the two factors is a very easy task. In this case, we can let q(x|φ PGM) = p(x|φ PGM), i.e., the second factor is the same distribution as the PGM prior but with a different set of parameters φ PGM. To illustrate this, we give an example below when the PGM prior is a linear dynamical system. Example (SIN for Linear Dynamical System (LDS)): When y n is a time series, we can model the latent x n using an LDS defined as p(x|θ):= N (x 0 |µ 0, Σ 0) N n=1 N (x n |Ax n−1, Q), where A is the transition matrix, Q is the process-noise covariance, and µ 0 and Σ 0 are the mean and covariance of the initial distribution. Therefore, θ PGM:= {A, Q, µ 0, Σ 0}. In our inference network, we choose q(x|φ PGM) = p(x|φ PGM) as show below, where φ PGM:= {Ā,Q,μ 0,Σ 0} and, since our PGM is a Gaussian, we choose the DNN factor to be a Gaussian as well: DISPLAYFORM4 where m n:= m φNN (y n) and V n:= V φNN (y n) are mean and covariance parameterized by a DNN with parameter φ NN. The generative model and SIN are shown in FIG0, respectively. The above SIN is a conjugate model where the marginal likelihood and distributions can be computed in O(N) using the forward-backward algorithm, a.k.a. Kalman smoother BID1. We can also compute the gradient of Z(φ) as shown in BID13.When the PGM prior has additional latent variables, e.g., the GMM prior has cluster indicators z n, we might want to incorporate their structure in SIN. This is illustrate in the example below. Example (SIN for GMM prior): The prior shown in has an additional set of latent variables z n. To mimic this structure in SIN, we choose the PGM factor as shown below with parameters DISPLAYFORM5, while keeping the DNN part to be a Gaussian distribution similar to the LDS case: DISPLAYFORM6 The model and SIN are shown in FIG0 and 1b, respectively. Fortunately, due to conjugacy of the Gaussian and multinomial distributions, we can marginalize x n to get a closed-form expression for log Z(φ):= n log k N (m n |μ k, V n +Σ k)π k. We can sample from SIN by first sampling from the marginal q(z n = k|y, φ) ∝ N m n |μ k, V n +Σ k π k. Given z n, we can sample x n from the following conditional: DISPLAYFORM7 In all of the above examples, we are able to satisfy the two conditions even when we use the same structure as the model. However, this may not always be possible for all conditionally-conjugate exponential family distributions. However, we can still obtain samples from a tractable structured mean-field approximation using VMP. We illustrate this for the switching state-space model in Appendix A. In such cases, a drawback of our method is that we need to run VMP long enough to get a sample, very similar to the method of BID10. However, our gradients are simpler to compute than theirs. Their method requires gradients of λ * (θ, φ) which depends both on θ and φ (see Proposition 4.2 in BID10). In our case, we require gradient of Z(φ) which is independent of θ and therefore is simpler to implement. An advantage of our method over the method of BID10 is that our method can handle non-conjugate factors in the generative model. When the PGM prior contains some non-conjugate factors, we might replace them by their closest conjugate approximations while making sure that the inference network captures the useful structure present in the posterior distribution. We illustrate this on a Student's t mixture model. To handle outliers in the data, we might want to use the Student's t-mixture component in the mixture model shown in, i.e., we set p(x n |z n = k) = T (x n |µ k, Σ k, γ k) with mean µ k, scale matrix Σ k and degree of freedom γ k. The Student's t-distribution is not conjugate to the multinomial distribution, therefore, if we use it as the PGM factor in SIN, we will not be able to satisfy both conditions easily. Even though our model contains a t-distribution components, we can still use the SIN shown in that uses a GMM factor. We can therefore simplify inference by choosing an inference network which has a simpler form than the original model. In theory, one can do this even when all factors are non-conjugate, however, the approximation error might be quite large in some cases for this approximation to be useful. In our experiments, we tried this for non-linear dynamical system and found that capturing non-linearity was essential for dynamical systems that are extremely non-linear. Previously, we assumed θ PGM to be deterministic. In this section, we relax this condition and assume θ PGM to follow an exponential-family prior p(θ PGM |η PGM) with natural parameter η PGM. We derive a VMP algorithm to perform natural-gradient variational inference for θ PGM. Our algorithm works even when the PGM part contains non-conjugate factors, and it does not affect the efficiency of the amortized inference on the DNN part. We assume the following mean-field approximation: q(x, θ|y):= q(x|y, φ)q(θ PGM |λ PGM) where the first term is equal to SIN introduced in the previous section, and the second term is an exponential-family distribution with natural parameter λ PGM. For θ NN and φ, we will compute point estimates. We build upon the method of BID11 which is a generalization of VMP and stochastic variational inference (SVI). This method enables natural-gradient updates even when PGM contains non-conjugate factors. This method performs natural-gradient variational inference by using a mirror-descent update with the Kullback-Leibler (KL) divergence. To obtain natural-gradients with respect to the natural parameters of q, the mirror-descent needs to be performed in the mean parameter space. We will now derive a VMP algorithm using this method. We start by deriving the variational lower bound. The variational lower bound corresponding to the mean-field approximation can be expressed in terms of L SIN derived in the previous section. Compute q(x|y, φ) for SIN shown in either by using an exact expression or using VMP. DISPLAYFORM0 Sample x * ∼ q(x|y, φ), and compute ∇ φ Z and ∇ φ x *. Update λ PGM using the natural-gradient step given in. Update θ NN and φ using the gradients given in FORMULA8 - FORMULA0 with θ PGM ∼ q(θ PGM |λ PGM). 7: until ConvergenceWe will use a mirror-descent update with the KL divergence for q(θ PGM |λ PGM) because we want natural-gradient updates for it. For the rest of the parameters, we will use the usual Euclidean distance. We denote the mean parameter corresponding to λ PGM by µ PGM. Since q is a minimal exponential family, there is a one-to-one map between the mean and natural parameters, therefore we can reparameterize q such that q(θ PGM |λ PGM) = q(θ PGM |µ PGM). Denoting the values at iteration t with a superscript t and using Eq. 19 in BID11 with these divergences, we get: DISPLAYFORM0 DISPLAYFORM1 where β 1 to β 3 are scalars,, is an inner product, and ∇L t is the gradient at the value in iteration t. As shown by BID11, the maximization in FORMULA0 can be obtained in closed-form: DISPLAYFORM2 When the prior p(θ PGM |η PGM) is conjugate to p(x|θ PGM), the above step is equal to the SVI update of the global variables. The gradient itself is equal to the message received by θ PGM in a VMP algorithm, which is also the natural gradient with respect to λ PGM. When the prior is not conjugate, the gradient can be approximated either by using stochastic gradients or by using the reparameterization trick BID11. Therefore, this update enables natural-gradient update for PGMs that may contain both conjugate and non-conjugate factors. The update of the rest of the parameters can be done by using a stochastic-gradient method. This is because the solution of the update FORMULA0 is equal to a stochastic-gradient descent update (one can verify this by simplify taking the gradient and setting it to zero). We can compute the stochasticgradients by using a Monte Carlo estimate with a sample θ * DISPLAYFORM3 where θ *:= {θ * PGM, θ NN}. As discussed in the previous section, these gradients can be computed similar to VAE-like by using the gradients given in-. Therefore, for the DNN part we can perform amortized inference, and use a natural-gradient update for the PGM part using VMP.The final algorithm is outlined in Algorithm 1. Since our algorithm enables Structured, Amortized, and Natural-gradient (SAN) updates, we call it the SAN algorithm. Our updates conveniently separate the PGM and DNN computations. Step 3-6 operate on the PGM part, for which we can use existing implementation for the PGM.Step 7 operates on the DNN part, for which we can reuse VAE implementation. Our algorithm not only generalizes previous works, but also simplifies the implementation by enabling the reuse of the existing software. The main goal of our experiments is to show that our SAN algorithm gives similar to the method of BID10. For this reason, we apply our algorithm to the two examples considered in BID10, namely the latent GMM and latent LDS (see FIG0). In this section we discuss for latent GMM. An additional for LDS is included in Appendix C. Our show that, similar to the method of BID10 our algorithm can learn complex Even with 70% outliers, SAN-TMM performs better than SAN-GMM with 10% outliers. DISPLAYFORM0 Figure 3: Top row is for the Pinwheel dataset, while the bottom row is for the Auto dataset. Point clouds in the of each plot show the samples generated from the learned generative model, where each mixture component is shown with a different color and the color intensities are proportional to the probability of the mixture component. The points in the foreground show data samples which are colored according to the true labels. We use K = 10 mixture components to train all models. For the Auto dataset, we show only the first two principle components.representations with interpretable structures. The advantage of our method is that it is simpler and more general than the method of BID10.We compare to three baseline methods. The first method is the variational expectation-maximization (EM) algorithm applied to the standard Gaussian mixture model. We refer to this method as'GMM'. This method is a clustering method but does not use a DNN to do so. The second method is the VAE approach of BID12, which we refer to as'VAE'. This method uses a DNN but does not cluster the outputs or latent variables. The third method is the SVAE approach of BID10 applied to latent GMM shown in FIG0. This method uses both a DNN and a mixture model to cluster the latent variables. We refer to this as'SVAE'. We compare these methods to our SAN algorithm applied to latent GMM model. We refer to our method as'SAN'. All methods employ a Normal-Wishart prior over the GMM hyperparameters (see BID1 for details).We use two datasets. The first dataset is the synthetic two-dimensional Pinwheel dataset (N = 5000 and D = 2) used in BID10. The second dataset is the Auto dataset (N = 392 and D = 6, available in the UCI repository) which contains information about cars. The dataset also contains a five-class label which indicates the number of cylinders in a car. We use these labels to validate our . For both datasets we use 70% data for training and the rest for testing. For all methods, we tune the step-sizes, the number of mixture components, and the latent dimensionality on a validation set. We train the GMM baseline using a batch method, and, for VAE and SVAE, we use minibatches of size 64. DNNs in all models consist of two layers with 50 hidden units and an output layer of dimensionality 6 and 2 for the Auto and Pinwheel datasets, respectively. FIG1 and 2b compare the performances during training. In FIG1, we compare to SVAE and GMM, where we see that SAN converges faster than SVAE. As expected, both SVAE and SAN achieve similar performance upon convergence and perform better than GMM. In FIG1, we compare to VAE and GMM, and observe similar trends. The performance of GMM is represented as a constant because it converges after a few iterations already. We found that the implementation provided by BID10 does not perform well on the Auto dataset which is why we have not included it in the comparison. We also compared the test log-likelihoods and imputation error which show very similar trends. We omit these due to space constraints. In the of each plot in Figure 3, we show samples generated from the generative model. In the foreground, we show the data with the true labels. These labels were not used during training. The plots (a)- FORMULA29 show for the Pinwheel dataset, while plots FORMULA29 - FORMULA29 shows for the Auto dataset. For the Auto dataset, each label corresponds to the number of cylinders present in a car. We observe that SAN can learn meaningful clusters of the outputs. On the other hand, VAE does not have any mechanisms to cluster and, even though the generated samples match the data distribution, the are difficult to interpret. Finally, as expected, both SAN and VAE learn flexible patterns while GMM fails to do so. Therefore, SAN enables flexible models that are also easy to interpret. An advantage of our method over the method of BID10 is that our method applies even when PGM contains non-conjugate factors. Now, we discuss a for such a case. We consider the SIN for latent Student's t-mixture model (TMM) discussed in Section 3. The generative model contains the student's t-distribution as a non-conjugate factor, but our SIN replaces it with a Gaussian factor. When the data contains outliers, we expect the SIN for latent TMM to perform better than the SIN for latent GMM. To show this, we add artificial outliers to the Pinwheel dataset using a Gaussian distribution with a large variance. We fix the degree of freedom for the Student's t-distribution to 5. We test on four different levels of noise and report the test MSE averaged over three runs for each level. FIG1 shows a comparison of GMM, SAN on latent GMM, and SAN on latent TMM where we see that, as the noise level is increased, latent TMM's performance degrades slower than the other methods (note that the y-axis is in log-scale). Even with 70% of outliers, the latent TMM still performs better than the latent GMM with only 10% of outliers. This experiment illustrates that a conjugate SIN can be used for inference on a model with a non-conjugate factor. We propose an algorithm to simplify and generalize the algorithm of BID10 for models that contain both deep networks and graphical models. Our proposed VMP algorithm enables structured, amortized, and natural-gradient updates given that the structured inference networks satisfy two conditions. The two conditions derived in this paper generally hold for PGMs that do not force dense correlations in the latent variables x. However, it is not clear how to extend our method to models where this is the case, e.g., Gaussian process models. It is possible to use ideas from sparse Gaussian process models and we will investigate this in the future. An additional issue is that our are limited to small scale data. We found that it is non-trivial to implement a message-passing framework that goes well with the deep learning framework. We are going to pursue this direction in the future and investigate good platforms to integrate the capabilities of these two different flavors of algorithms. In SLDS, we introduce discrete variable z n ∈ {1, 2, . . ., K} that are sampled using a Markov chain: p(z n = i|z n−1 = j) = π ij such that π ij sum to 1 over all i given j. The transition for LDS is defined conditioned on z n: p(x n |x n−1, z n = i, θ PGM):= N (x n |A i x n−1, Q i) where A i and Q i are parameters for the i'th indicator. These two dynamics put together define the SLDS prior p(x, z|θ PGM). We can use the following SIN which uses the SLDS prior as the PGM factor but with parameters φ PGM instead of θ PGM. The expression for q(x, z|y, φ) is shown below: DISPLAYFORM0 Even though the above model is a conditionally-conjugate model, the partition function is not tractable and sampling is also not possible. However, we can use a structured mean-field approximation. First, we can combine the DNN factor with the Gaussian observation of SLDS factor and then use a mean-field approximation q(x, z|y, φ) ≈ q(x|λ x)q(z|λ z), e.g., using the method of BID5. This will give us a structured approximation where the edges between y n and x n and z n and z n−1 are maintained but x n and z n independent of each other. In this section we give detailed derivations for the SIN shown in. We derive the normalizing constant Z(φ) and show how to generate samples from SIN.We start by a simple rearrangement of SIN defined in: DISPLAYFORM0 DISPLAYFORM1 where the first step follows from the definition, the second step follows by taking the sum over k outside, and the third step is obtained by defining each component as a joint distribution over x n and the indicator variable z n.We will express this joint distribution as a multiplication of the marginal of z n and conditional of x n given z n. We will see that this will give us the expression for the normalizing constant, as well as a way to sample from SIN.We can simplify the joint distribution further as shown below. The first step follows from the definition. The second step is obtained by swapping m n and x n in the first term. The third step is obtained by completing the squares and expressing the first term as a distribution over x n (the second and third terms are independent of x n).q(x n, z n = k|y n, φ) ∝ N (x n |m n, V n)N (x n |μ k,Σ k)π k DISPLAYFORM2 where Σ Using the above we get the marginal of z n and conditional of x n given z n: DISPLAYFORM3 q(x n |z n = k, y n, φ):= N (x n | µ n, Σ n)The normalizing constant of the marginal of z n is obtained by simply summing over all k: DISPLAYFORM4 and since q(x n |z n = k, y n, φ) is already a normalized distribution, we can write the final expression for the SIN as follows: DISPLAYFORM5 q(x n |z n = k, y n, φ)q(z n = k|y n, φ)where components are defined in FORMULA1, FORMULA1, and. The normalizing constant is available in closed-form and we can sample z n first and then generate x n. This completes the derivation. In this experiment, we apply our SAN algorithm to the latent LDS discussed in Section 3. For comparison, we compare our method, Structured Variational Auto-Encoder (SVAE) BID10, and LDS on the Dot dataset used in BID10. Our show that our method achieves comparable performance to SVAE. For LDS, we perform batch learning for all model parameters using the EM algorithm. For SVAE and SAN, we perform mini-batch updates for all model parameters. We use the same neutral network architecture as in BID10, which contains two hidden layers with tanh activation function. We repeat our experiments 10 times and measure model performance in terms of the following mean absolute error for τ -steps ahead prediction. The error measures the absolute difference between the ground truth and the generative outputs by averaging across generated . 27) where N is the number of testing time series with T time steps, d is the dimensionality of observation y, and observation y * t+τ,n denotes the ground-truth at time step t + τ. From FIG4, we can observe that our method performs as good as SVAE and outperforms LDS. Our method is slightly robust than SVAE. In FIG5, there are generated images obtained from all methods. From FIG5, we also see that our method performs as good as SAVE and is able to recover the ground-truth observation.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HyH9lbZAW
We propose a variational message-passing algorithm for models that contain both the deep model and probabilistic graphical model.
"Modern deep neural networks have a large amount of weights, which make them difficult to deploy on (...TRUNCATED)
[0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0(...TRUNCATED)
B1eHgu-Fim
"A simple modification to low-rank factorization that improves performances (in both image and langu(...TRUNCATED)
"Deep learning training accesses vast amounts of data at high velocity, posing challenges for datase(...TRUNCATED)
[0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0(...TRUNCATED)
S1e0ZlHYDB
"We propose a simple, general, and space-efficient data format to accelerate deep learning training (...TRUNCATED)
"It is fundamental and challenging to train robust and accurate Deep Neural Networks (DNNs) when sem(...TRUNCATED)
[1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0(...TRUNCATED)
rylUOn4Yvr
"ROBUST DISCRIMINATIVE REPRESENTATION LEARNING VIA GRADIENT RESCALING: AN EMPHASIS REGULARISATION PE(...TRUNCATED)
"Generative Adversarial Networks (GANs) have achieved remarkable in the task of generating realistic(...TRUNCATED)
[0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0(...TRUNCATED)
ryj38zWRb
"Are GANs successful because of adversarial training or the use of ConvNets? We show a ConvNet gener(...TRUNCATED)
"In this paper, we propose a novel kind of kernel, random forest kernel, to enhance the empirical pe(...TRUNCATED)
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0(...TRUNCATED)
HJxhWa4KDr
Equip MMD GANs with a new random-forest kernel.

No dataset card yet

New: Create and edit this dataset card directly on the website!

Contribute a Dataset Card
Downloads last month
0
Add dataset card

Models trained or fine-tuned on Blaise-g/scitldr